Search This Blog

12 October 2017

Remote debugging with Stunt developer analytics

    Let's look at an example: you are developing an Android application and you want to debug it remotely, on a device that is in another location.
    You could come up with a solution that sends commands to a remote machine, which builds and installs your application on a device connected to it, and then forwards the Logcat output to your machine so you can watch it in real time. But. And it is a capital-case "BUT", because all of that means you need an experienced person at the remote machine who can install, keep up to date and keep functioning the complete setup needed for a development environment. And that is not your goal. Your goal is just to have remote debugging, and for that you need the Logcat output, and most probably not all of it, but just the part produced by your application.
    No problem: you can have that with Stunt developer analytics. The remote device does not need to be connected to a computer. You just need someone running your application and performing the steps to reproduce the issue. You don't even need to wait for an issue and have steps for it; you just need to have your application set up with Stunt, and you will know there is an issue by looking at the output you get in the Stunt web application.
    In addition to the Stunt web application there are clients for Android and iOS, but you can use the web application with Stunt clients for other types of applications as well: JavaScript, web applications, whatever you want. Just create a client implementation for them and use it with the Stunt web application.
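A client along these lines just needs to send log entries to the reporting server over HTTP. The sketch below is hypothetical: the endpoint URL, the buildReport helper and the JSON field names are illustrative assumptions, not the actual Stunt protocol (the real request format is documented in the Postman collection in the server repository).

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class StuntClientSketch {

    // Hypothetical endpoint; the real path is defined by the Stunt server.
    private static final String REPORT_URL = "http://example.com/stunt/report";

    // Build a minimal JSON report; the field names are illustrative.
    static String buildReport(String tag, String message) {
        return "{\"tag\":\"" + tag + "\",\"message\":\"" + message + "\"}";
    }

    // POST the report to the server; an app would call this where it
    // would otherwise call Log.d().
    static void send(String tag, String message) throws Exception {
        byte[] body = buildReport(tag, message).getBytes(StandardCharsets.UTF_8);
        HttpURLConnection conn =
                (HttpURLConnection) new URL(REPORT_URL).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        conn.getResponseCode(); // read the status so the request completes
        conn.disconnect();
    }

    public static void main(String[] args) {
        System.out.println(buildReport("MainActivity", "onCreate"));
    }
}
```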

Stunt developer analytics project description:

Stunt web application GitHub repository:

Stunt Android client GitHub repository:

Stunt iOS client GitHub repository:

21 September 2017

Digital life

    Let’s code a little JavaScript project.
We have a world (a labyrinth, a map) and some creatures that roam in it. The world is described with walls and movable space. The creatures can move only through the movable space and cannot move through the walls. The creatures have a field of vision of 1 square around them.

We will use some symbols to denote each one of the objects in our world:
    wall - *
    movable space - white space (space " ")
    creature - o

We will store the world in a two dimensional array. This way we can denote a location in the world with a pair of coordinates (x, y).

We could have used a one dimensional array. Then a location in the world would be:
    index = y * width + x
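The two representations are equivalent, and the formula above maps between them. A quick sketch of the conversion in both directions (written in Java here for consistency with the other posts, though the project itself is JavaScript):

```java
public class WorldIndex {

    // Convert (x, y) in a grid of the given width to a one-dimensional index.
    static int toIndex(int x, int y, int width) {
        return y * width + x;
    }

    // Convert a one-dimensional index back to (x, y).
    static int[] toCoordinates(int index, int width) {
        return new int[] { index % width, index / width };
    }

    public static void main(String[] args) {
        int width = 10;
        int index = toIndex(3, 2, width);      // 2 * 10 + 3 = 23
        int[] xy = toCoordinates(index, width);
        System.out.println(index + " -> (" + xy[0] + ", " + xy[1] + ")");
    }
}
```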

Here is the small repository of the project:

09 September 2017

Marvin framework on Android

    If you remember our Sentinel application, we considered using the Marvin Framework for the image comparison, but it uses some Java packages that are not available on Android.
    Now we take a second look at it.
    Well, it turns out that we don't want to port it after all: it uses AWT, and that is a lot of work that we don't want to do right now.
    But here is an example of using the Marvin Framework on Android anyway. We are using the GrayScale filter. We have updated the instance variable "image" of the MarvinImage class to use a Bitmap instead of a BufferedImage:
// Image 
protected Bitmap image; 
And here is the code to apply a GrayScale filter:
File fileOut = new File(getCacheDir(), "image.png");
BufferedInputStream inputStream = null;
FileOutputStream fos = null;
try {
  inputStream = new BufferedInputStream(getAssets().open("ic_launcher.png"));
  fos = new FileOutputStream(fileOut);
  byte[] buffer = new byte[8192];
  int length = 0;
  while ((length = inputStream.read(buffer)) > 0) {
    fos.write(buffer, 0, length);
  }
  fos.flush();
} catch (IOException e) {
  e.printStackTrace();
} finally {
  if (inputStream != null) {
    try { inputStream.close(); } catch (IOException e) { /* do nothing */ }
  }
  if (fos != null) {
    try { fos.close(); } catch (IOException e) { /* do nothing */ }
  }
}

String filePath = fileOut.getPath();
Log.i(TAG, "filePath=" + filePath + ", absolutePath=" + fileOut.getAbsolutePath());

// The "image" instance variable of MarvinImage now holds a Bitmap
// instead of a BufferedImage, so we build the MarvinImage from a Bitmap.
Bitmap myBitmap = BitmapFactory.decodeFile(filePath);
int[] pixels = new int[myBitmap.getWidth() * myBitmap.getHeight()];
myBitmap.getPixels(pixels, 0, myBitmap.getWidth(), 0, 0, myBitmap.getWidth(), myBitmap.getHeight());
MarvinImage image = new MarvinImage(myBitmap.getWidth(), myBitmap.getHeight());
image.setIntColorArray(pixels);

GrayScale grayScale = new GrayScale();
grayScale.load();
MarvinImagePlugin imagePlugin = grayScale;
//MarvinImagePlugin imagePlugin = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.grayScale.jar");
imagePlugin.process(image, image);
image.update();

// TODO: implement MarvinImageIO.loadImage and MarvinImageIO.saveImage
// so that they don't rely on the javax packages; then the original
// API below could be used instead:
//MarvinImage image = MarvinImageIO.loadImage(filePath);
//MarvinImagePlugin imagePlugin = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.color.grayScale.jar");
//imagePlugin.process(image, image);
//image.update();

int[] colorArray = image.getIntColorArray();
Bitmap bitmap = Bitmap.createBitmap(colorArray, myBitmap.getWidth(), myBitmap.getHeight(), Bitmap.Config.ARGB_8888);
mImageViewMain.setBackgroundDrawable(new BitmapDrawable(bitmap));

Building the application

    When I am working with a technology I already know, I think: how can other people say it is difficult, this is easy stuff.
    And when I look into a new language or technology stack, I think: oh, all these tools and things, now I understand why someone would say it is difficult, because it is.
    And after a few days or weeks, when I am over the hump, I am back to my old self: it is not difficult at all, I can't believe I considered it difficult, and how can anyone say it is.
Keep hustling.

07 August 2017

Shortcut application

    You have a nice website or web application that is responsive and all, but you also want a presence on the Google Play store. There are solutions that package your nice web content or application into an Android application, but we consider that this would put you at a disadvantage. Either build a native application, or just direct users to your web application that is built to run in the browser; that is where its strength is. But you still want a presence on the store. Well then, just create an application that redirects users to your web application in the device's browser.
    Here is an example in the link. You can modify it by changing the icons, the application name and the URL it opens in the browser, and you are good to build and upload to the store:
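Such an application can be as small as a single activity that fires a browser intent and finishes. A minimal sketch along those lines; the class name and URL are placeholders, not the linked project's actual code:

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;

public class ShortcutActivity extends Activity {

    // Placeholder; replace with your web application's address.
    private static final String TARGET_URL = "https://example.com";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Hand the URL to the device's browser...
        startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse(TARGET_URL)));
        // ...and close this activity, so the app itself never shows a UI.
        finish();
    }
}
```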

30 April 2017

EndorRouting - pathfinding between planets in a solar system

I came across this interesting challenge 

It seems more Math Wars than Star Wars, even more so than their previous one (our solution for the Quest Wars I challenge - Store Locations), but here goes nothing :)


GitHub repository:
There you can find the source code for the Android application, as an Android Studio project.

As we have a solar system, we are using a polar coordinate system.
We can find the implementation of the routing algorithm in the source file, in the method getRouteMinimax.

In the same source file we can also find other implementations of the routing calculations, but as we have noted in the source comments, they are either not optimal, less correct, or both.

Of course, the shortest path between two planets would be a straight line. In "figure 1" we see this route; but while we are traveling in our spaceship, the planet keeps traveling along its orbit as well, and by the time we arrive we find that it has moved and is no longer where we plotted our route - "figure 2".
We could constantly update our route while we travel - "figure 3". But this means a somewhat longer path. Instead, we calculate where the planet will be after the time it takes us to cover the route, and plot our route to that location - "figure 4". We calculate it to within 1 kilometer, but we can calculate it to as much or as little precision as we need.
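That intercept calculation can be sketched as a fixed-point iteration: guess a travel time, see where the planet will be then, and refine until the guess stops changing. This is an illustrative version, not the getRouteMinimax implementation from the repository; a circular orbit and a constant ship speed are assumptions of the sketch:

```java
public class InterceptSketch {

    // Position of a planet on a circular orbit at time t
    // (polar coordinates converted to Cartesian).
    static double[] planetAt(double radius, double angle0,
                             double angularVelocity, double t) {
        double a = angle0 + angularVelocity * t;
        return new double[] { radius * Math.cos(a), radius * Math.sin(a) };
    }

    // Iterate until the travel-time guess converges to within 'tolerance'.
    static double interceptTime(double shipX, double shipY, double shipSpeed,
                                double radius, double angle0,
                                double angularVelocity, double tolerance) {
        double t = 0;
        for (int i = 0; i < 1000; i++) {
            double[] p = planetAt(radius, angle0, angularVelocity, t);
            double next = Math.hypot(p[0] - shipX, p[1] - shipY) / shipSpeed;
            if (Math.abs(next - t) < tolerance) {
                return next;
            }
            t = next;
        }
        return t;
    }

    public static void main(String[] args) {
        // Ship at the origin, planet orbiting at radius 1000 km.
        double t = interceptTime(0, 0, 10, 1000, 0, 0.001, 1e-6);
        System.out.println("travel time: " + t);
    }
}
```

Plotting the route toward planetAt(radius, angle0, angularVelocity, interceptTime(...)) then gives the straight line of "figure 4".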

 figure 1

figure 2

figure 3

figure 4

We have lavishly used floats and doubles, but if we want to speed things up we have to consider where we can sacrifice precision and use integers instead.
The Minimax calculations are performed on a single thread, and as we know, most star citizens use multi-core mobile devices with at least 2 or 4 cores, and the marketing people sent to the new market planets are bound to have the newest and flashiest CPUs with more than 8 cores, so updating that part of the calculations to take advantage of multi-threading would bring a big benefit.

Please do not install the software on your space ships yet, because it is not production ready. It has small pieces missing like validation of input and such. It is just a prototype.


 (Solar system - the movement is updated only 4 times a second; if we want it to be smoother, we just need to update it more times per second)

26 February 2017

Stunt remote logging and analytics update

We have updated the Stunt reporting server repository with a Postman collection for the report request that the server accepts from clients:

We have also added a quick and simple Stunt client implementation for iOS:

01 February 2017

Complicated, by design

Or, as I say: why should it be simple, easy and fast when it can be complicated, difficult and slow?

    'Simple, readable easy to maintain implementation.'
    'Complicated, hard to read and follow, difficult to maintain implementation.'
    It is not hard to say which of the two is better and preferred. But there is something that constrains and partially drives the implementation, and that is the design.
- if we have a simple system design, it is easy to have the first kind of implementation.
- if we have a somewhat complicated design, it is less simple to have the first kind of implementation, but it is still possible.
- if we have an overly and unnecessarily complicated design, then it is almost impossible to have the first kind of implementation, no matter how much good practice, knowledge and implementation mastery we throw at it.
Don’t forget, design comes first.

03 January 2017

Detect object movement

    Here is our small surveillance camera software. The application uses the camera of the device. The user positions the device and does not have to watch the camera view all the time: when something in the image changes (there is activity in the field that the camera is observing), the application draws a rectangle on the screen where something has changed or is changing, and plays a sound.
    - feature to select which source to use - camera, video file, or sequence of images.
    - feature to not report movement caused by camera movement - image stabilization.
    - feature to play a sound when a difference is detected.

    The libraries that we considered for this project are listed below. Initially we wanted to use the Marvin Framework, but it uses Java packages that are not available on Android, and porting the framework to Android would be a project of its own. That is why our next choice was the Catalano Framework, which can be used in an Android application without any modifications.

Marvin Image Processing Framework


JMagick - Java wrapper for ImageMagick API

Catalano Framework

OpenCV - (Open Computer Vision)

    For the image stabilization feature we considered using phase correlation or convolution, and decided to use a filter we were already using, provided by the Catalano Framework: Catalano.Imaging.Filters.Difference.
We thought that it used convolution to calculate the difference, but after looking at the source it turns out it does not. If it used a convolution matrix, it would probably produce better results in finding differences.
Instead of performing the image stabilization over the whole image, we use a subset of it. We take a part of the image and another, slightly bigger part, and calculate the differences at increasing offsets to find the smallest difference, and from it we calculate the offset point.
Had we used phase correlation, we would have done the same with different delays, to find where the frequencies match the most.
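That offset search can be sketched as a brute-force minimization of the sum of absolute differences over a small search window. This is an illustrative grayscale version, not the application's actual code; the method names and parameters are our own:

```java
public class OffsetSearchSketch {

    // Sum of absolute differences between the reference patch at (px, py)
    // and the same-sized patch of 'frame' shifted by (dx, dy).
    static long difference(int[][] reference, int[][] frame,
                           int px, int py, int size, int dx, int dy) {
        long sum = 0;
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                sum += Math.abs(reference[py + y][px + x]
                              - frame[py + y + dy][px + x + dx]);
            }
        }
        return sum;
    }

    // Try every offset in [-range, range] and keep the one with the
    // smallest difference; that offset estimates the camera movement.
    static int[] findOffset(int[][] reference, int[][] frame,
                            int px, int py, int size, int range) {
        long best = Long.MAX_VALUE;
        int bestDx = 0, bestDy = 0;
        for (int dy = -range; dy <= range; dy++) {
            for (int dx = -range; dx <= range; dx++) {
                long d = difference(reference, frame, px, py, size, dx, dy);
                if (d < best) {
                    best = d;
                    bestDx = dx;
                    bestDy = dy;
                }
            }
        }
        return new int[] { bestDx, bestDy };
    }

    public static void main(String[] args) {
        int[][] ref = new int[10][10];
        int[][] frame = new int[10][10];
        for (int y = 0; y < 10; y++)
            for (int x = 0; x < 10; x++)
                ref[y][x] = y * 10 + x;
        // Simulate a camera shift of (2, 1): the frame shows the scene moved.
        for (int y = 1; y < 10; y++)
            for (int x = 2; x < 10; x++)
                frame[y][x] = ref[y - 1][x - 2];
        int[] off = findOffset(ref, frame, 4, 4, 3, 2);
        System.out.println("offset: (" + off[0] + ", " + off[1] + ")");
    }
}
```

The movement detection then compares the two frames after shifting one of them by the found offset, so that camera movement is not reported as object movement.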
When there is movement in different parts of the screen (probably different objects), the whole area is reported rather than the individual objects, because we decided that splitting the areas is not that important. To add this functionality, we just have to calculate the changes and group them into clusters, so that we can separate the different moving objects on the screen.
Repository of the project at GitHub: