Friday, June 6, 2014

Netflix in Ubuntu, finally (and easy)

I followed this guide, which suggests installing pipelight:

sudo add-apt-repository ppa:pipelight/stable
sudo apt-get update && sudo apt-get install pipelight-multi
sudo pipelight-plugin --enable silverlight
sudo pipelight-plugin --update

Then, as instructed here, install this Firefox add-on to switch your user agent to a Windows version of Firefox (basically fooling Netflix into believing that your browser is running on Windows). Select a Firefox Windows profile.

Friday, March 7, 2014

OpenGL drawing in ROS rviz

At work we have a bunch of legacy OpenGL code from the pre-ROS era. Fortunately for us, there is a way to reuse it in rviz. Here is a gist showing how to do it:

Basically you set up your display to create a rendering queue and then push OpenGL code into it.

For instance, I had code that would create an OpenGL display list. I could use the exact same code to create the list; all I had to do was let the rviz display use it.

It's not super easy, mostly because the process is not well documented, but certainly easier than rewriting the whole thing in native OGRE code. The code provided in the gist definitely helps.

Besides, in that particular case, the OpenGL code is stored in binary format (I guess it was created by some CAD drawing export tool).

Tuesday, July 23, 2013

Offline data, visualizer and profiling tools help write efficient code

In the past few months I made great progress with analyzing point clouds from Velodyne sensors. One of the challenges is to write super fast algorithms. Looking back, I realize that a few things made my life easier.
  • offline data. I spent quite some time writing tools that play back data recorded from the live sensor. I could play it back at real-time speed, or as fast as possible, and the latter was really useful when I wanted to profile my code. Besides, it allowed me to work from home and save a lot of commuting time.
  • a visualizer, tightly integrated with my code so that it can access much of the internal data. With the visualizer I could look at the data, change the value of some parameters to see their effect in real time, pause, zoom, and really ask my code questions, such as "why is this point considered foreground?"
  • a profiler. I've been using Google's gperftools. Besides, I have easy-to-use timers that I can quickly integrate in my code to measure the execution time of a particular block. Those timers print their results to stdout, and I wrote a script to parse that output and display the stats neatly in a table (mean, standard deviation, min, max). Those timers were useful to cover for some of the shortcomings of the profiler (it has to be run in debug mode).
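The parsing step can be sketched with awk. This is a minimal sketch, assuming a made-up output format of one `TIMER <name> <milliseconds>` line per measurement (my real timers' output format differs):

```shell
# Sample timer output piped through an awk script that computes
# count, mean, standard deviation, min and max per timer name.
stats=$(printf 'TIMER segment 10\nTIMER segment 20\nTIMER segment 30\n' | awk '
$1 == "TIMER" {
    n[$2]++; sum[$2] += $3; sumsq[$2] += $3 * $3
    if (!($2 in min) || $3 < min[$2]) min[$2] = $3
    if (!($2 in max) || $3 > max[$2]) max[$2] = $3
}
END {
    printf "%-12s %5s %8s %8s %8s %8s\n", "timer", "count", "mean", "stddev", "min", "max"
    for (t in n) {
        mean = sum[t] / n[t]
        var  = sumsq[t] / n[t] - mean * mean
        if (var < 0) var = 0   # guard against rounding error
        printf "%-12s %5d %8.3f %8.3f %8.3f %8.3f\n", t, n[t], mean, sqrt(var), min[t], max[t]
    }
}')
echo "$stats"
```

In the real setup, the program's stdout would be piped into the script instead of the `printf` sample data.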

Friday, May 24, 2013

issues in profiling

I'm trying to profile some piece of code. I'm using google perftools for that. I have 4 nested loops:

1:  for( ... ) {  
2:    for( ... ) {  
3:      for( /*loop over some values of b2*/ ) {  
4:        for( vector<xxx>::const_iterator it=some_vector[b2].begin(); it!=some_vector[b2].end(); ++it ) {  
5:          do_something_heavy(*it);  
6:        }  
7:      }  
8:    }  
9:  }  

I built this with debugging symbols and compiler optimization turned off, and ran it with Google's profiler. It told me that there were many samples on line 5, which I expected of course, but also many samples on line 4, which I expected less (about 500 each). Of course, line 4 being the 4th nested loop, it gets executed a lot, but the only thing it does is increment the iterator and compare it with the end iterator.
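For reference, the profiler invocation I'm describing goes roughly like this (a sketch; `myprog` is a placeholder, and the pprof binary is called google-pprof on Ubuntu, plain pprof elsewhere):

```shell
# Assumed gperftools workflow:

# 1. link against the profiler library, keeping debug symbols:
#      g++ -g -O0 main.cpp -o myprog -lprofiler
# 2. run the program; CPUPROFILE selects the profile output file:
#      CPUPROFILE=/tmp/myprog.prof ./myprog
# 3. inspect the samples, broken down per source line:
#      google-pprof --text --lines ./myprog /tmp/myprog.prof
```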

To understand it a bit better, I broke it down and cached the end iterator:

1:  for( ... ) {  
2:    for( ... ) {  
3:      for( /*loop over some values of b2*/ ) {  
4:        const vector<xxx>::const_iterator end = some_vector[b2].end();  
5:        vector<xxx>::const_iterator it = some_vector[b2].begin();  
6:        while( it != end ) {  
7:          do_something_heavy(*it);  
8:          ++it;  
9:        }  
10:      }  
11:    }  
12:  }  

And what the profiler tells me now is that line 6 gets about 200 samples, lines 4 and 5 get almost nothing, and line 8 (++it) gets 50. In other words, most of the looping time is spent comparing the two iterators...

I then tried using an integer index rather than an iterator:

1:  for( ... ) {  
2:    for( ... ) {  
3:      for( /*loop over some values of b2*/ ) {  
4:        const vector<xxx> & vector_b2 = some_vector[b2];  
5:        const unsigned N = vector_b2.size();  
6:        unsigned n = 0;  
7:        while( n != N ) {  
8:          do_something_heavy(vector_b2[n]);  
9:          ++n;  
10:        }  
11:      }  
12:    }  
13:  }  

This time, the profiler reports that very few samples are associated with lines other than the do_something_heavy line.


When I turn compiler optimization on and time the execution, the three versions show no significant difference in execution time, which is quite reassuring in a way.

- it sucks to have to run the profiler with compiler optimization turned off
- it's good to know (or rather confirm) that iterators are not slower than indexing

If somebody knows of a reliable method to profile code, I'd be happy to hear it.

Wednesday, May 8, 2013

running a script when the network connection status changes

There are a number of things I'd like to do when my network status changes: for instance, use a specific .ssh/config, or a specific synergy profile. Thanks to this post, I learned that NetworkManager makes that easy. The trick is to place scripts in /etc/NetworkManager/dispatcher.d/ and use a switch-case approach to detect which interface (eth0, wlan0, etc.) has changed and what the new status is (up, down, etc.). This is documented in NetworkManager's man page.
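A dispatcher script can be sketched like this (the file name and the actions handled are illustrative; the authoritative list is in the man page). NetworkManager calls each script with two arguments: the interface and the action.

```shell
#!/bin/sh
# Sketch of /etc/NetworkManager/dispatcher.d/99-example (name is made up).
handle_event() {
    iface="$1"; action="$2"
    case "$iface:$action" in
        eth0:up|wlan0:up)
            # e.g. install the right .ssh/config or synergy profile here
            echo "$iface up"
            ;;
        eth0:down|wlan0:down)
            echo "$iface down"
            ;;
        *)
            echo "ignored"
            ;;
    esac
}

# a real dispatcher script would simply end with: handle_event "$1" "$2"
handle_event wlan0 up
```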

Incidentally, I also learned about the logger command, which allows writing messages to the /var/log/syslog log file.

running a script on resume / suspend

I wrote a script that rewrites my .ssh/config file according to my network location: using route, I can detect which network I am on and use that to select the appropriate ssh config options. But I had to run that script manually, until I found a way to run it automatically when my session resumes.
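The route-based detection can be sketched like this (the gateway addresses and profile names are placeholders, not my actual networks):

```shell
#!/bin/sh
# Pick an ssh config profile based on the default gateway address.
select_profile() {
    case "$1" in
        192.0.2.1)    echo "home" ;;
        198.51.100.1) echo "work" ;;
        *)            echo "default" ;;
    esac
}

# `route -n` prints the routing table; the default route has destination 0.0.0.0
gw=$(route -n 2>/dev/null | awk '$1 == "0.0.0.0" { print $2; exit }')
profile=$(select_profile "$gw")
echo "$profile"
# then e.g.: cp ~/.ssh/config.$profile ~/.ssh/config
```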

The trick is to place scripts in /etc/pm/sleep.d. This is documented in the pm-utils man pages.
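Such a hook can be sketched like this (the file name is made up). pm-utils calls each hook with one argument: suspend, hibernate, resume or thaw.

```shell
#!/bin/sh
# Sketch of /etc/pm/sleep.d/99-ssh-config (name is illustrative).
on_power_event() {
    case "$1" in
        suspend|hibernate)
            echo "sleeping"
            ;;
        resume|thaw)
            # e.g. regenerate ~/.ssh/config for the current network here
            echo "resumed"
            ;;
        *)
            echo "ignored"
            ;;
    esac
}

# a real hook would simply end with: on_power_event "$1"
on_power_event resume
```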