11 October, 2015

Python Progress Bar

I was recently writing some Python to process test cases.  One long-ish test case would benefit from a progress bar, so I slapped one together below.

One thing to note: the bar doesn't allow going backwards; it indicates the current progress in ASCII by writing characters and a series of backspaces.



#!/usr/bin/python
import time
import sys

class ProgressBar():
  def __init__(self, N=10):
    self.barLen_ = N
    self.progress_ = 0
    # Draw the empty bar, then backspace to just inside the '['.
    print '[' + ' ' * self.barLen_ + ']',
    print '\b' * (self.barLen_ + 2),
  def setProgress(self, x):
    # x is a percentage (0-100); fill in any newly completed ticks.
    sys.stdout.flush()
    k = int(x / 100.0 * self.barLen_)
    if k > self.progress_:
      print '\b' + '.' * (k - self.progress_),
      self.progress_ = k

N = 50
p = ProgressBar(N)

for i in range(0, 101):
  p.setProgress(i)
  time.sleep(.05)

The result is of the form:

user@river:~$ ./pBar 
[..............                                  ] 
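An alternative to backspaces, and a common idiom, is to redraw the whole bar with a carriage return ('\r'), which returns the cursor to column 0.  A minimal sketch (the function name is mine, not from the code above):

```python
import sys

def draw_bar(fraction, width=50):
    # Render a bar for a fraction in [0, 1], e.g. '[.....     ]'.
    filled = int(fraction * width)
    return '[' + '.' * filled + ' ' * (width - filled) + ']'

# '\r' returns the cursor to column 0 so each write overdraws the last.
for pct in range(0, 101, 5):
    sys.stdout.write('\r' + draw_bar(pct / 100.0))
    sys.stdout.flush()
```

Unlike the backspace approach, this one also supports moving the bar backwards, since the entire bar is redrawn every time.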
Enjoy.

19 September, 2015

Limiting Network Bandwidth

Occasionally it's necessary to determine the effects of reduced network bandwidth.  Software written for networked systems backed by Gigabit Ethernet as well as DSL requires testing on both kinds of systems.  The alternative is to test on the most capable network and then simulate the lower-bandwidth systems on the same network by artificially limiting the bandwidth.

Consulting the all-knowing Google provided a couple of general options: trickle and wondershaper.  While there are other utilities, I focused mainly on these two.

Trickle is a 'lightweight user bandwidth shaper'.  Its particular value is the ability to limit processes independently, and it doesn't require superuser privileges.


$ sudo apt-get install trickle


You can limit uploads and downloads to 100 KB/s for a command such as;

$ trickle -u 100 -d 100 wget http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_1080p_stereo.ogg


Independently, you could specify alternative limits for another command.  Trickle is limited to TCP sockets, so if you need to limit bandwidth for UDP sockets you'll need to look for an alternative.  Applying a system-wide limit (again, TCP only) is supposedly supported via the trickled daemon utility, but after numerous hours I gave up on getting it to work.  Regardless, I abandoned its use not only because I couldn't get it to work, but also because of the TCP limitation and because both the daemon and client usage require prepending 'trickle' to the command line.  That disqualifies applying bandwidth constraints to network services like sshd without modifying each service's startup.

Seeking a system-wide bandwidth traffic shaper brought me back to Wondershaper once again.  In my first experience with wondershaper I found it didn't always work.  This round I tested usage from 1-10 Mbps, and quickly found that it limited the bandwidth as expected until reaching 10 Mbps, where the realized bandwidth jumped to 40 Mbps.  A head-scratcher: it worked fine for bandwidths under 10 Mbps but broke at >= 10 Mbps.  Consulting Google again brought me to a fix proposed by 'buzzy': https://github.com/magnific0/wondershaper/issues/2.  Modifying the wondershaper script as 'buzzy' indicated resolved the issue, and I could consistently specify bandwidths of 1-40 Mbps and observe the realized bandwidths accordingly.


$ cat go
#!/bin/bash

for i in `seq 1 50`; do
  echo "$i Mbps"
  K=`expr $i \* 1024`
  sudo wondershaper wlan0 $K $K
  iperf -t 10 -c 192.168.1.132 2> /dev/null | grep "Mbits/sec"
  sleep 2
done
sudo wondershaper wlan0 clear



lipeltgm@river:~$ ./go 
1 Mbps
[  3]  0.0-11.2 sec  1.38 MBytes  1.03 Mbits/sec
2 Mbps
[  3]  0.0-10.7 sec  2.62 MBytes  2.06 Mbits/sec
3 Mbps
[  3]  0.0-10.4 sec  3.75 MBytes  3.02 Mbits/sec
4 Mbps
[  3]  0.0-10.4 sec  5.00 MBytes  4.04 Mbits/sec
5 Mbps
[  3]  0.0-10.5 sec  5.12 MBytes  4.11 Mbits/sec
6 Mbps
[  3]  0.0-10.3 sec  7.38 MBytes  6.00 Mbits/sec
7 Mbps
[  3]  0.0-10.3 sec  8.50 MBytes  6.95 Mbits/sec
8 Mbps
[  3]  0.0-10.3 sec  9.75 MBytes  7.97 Mbits/sec
9 Mbps
[  3]  0.0-10.2 sec  10.6 MBytes  8.76 Mbits/sec
10 Mbps
[  3]  0.0-10.1 sec  12.0 MBytes  9.94 Mbits/sec
11 Mbps
[  3]  0.0-10.1 sec  13.2 MBytes  11.0 Mbits/sec
12 Mbps
[  3]  0.0-10.2 sec  14.5 MBytes  12.0 Mbits/sec
13 Mbps
[  3]  0.0-10.2 sec  15.5 MBytes  12.8 Mbits/sec
14 Mbps
[  3]  0.0-10.3 sec  16.1 MBytes  13.2 Mbits/sec
15 Mbps
[  3]  0.0-10.1 sec  16.4 MBytes  13.6 Mbits/sec
16 Mbps
[  3]  0.0-10.1 sec  19.0 MBytes  15.8 Mbits/sec
17 Mbps
[  3]  0.0-10.1 sec  19.8 MBytes  16.4 Mbits/sec
18 Mbps
[  3]  0.0-10.1 sec  21.1 MBytes  17.6 Mbits/sec
19 Mbps
[  3]  0.0-10.1 sec  22.1 MBytes  18.3 Mbits/sec
20 Mbps
[  3]  0.0-10.1 sec  23.1 MBytes  19.3 Mbits/sec
...
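For reference, wondershaper takes its rates in Kbps, which is why the script multiplies each Mbps figure by 1024.  The conversion as a standalone sketch:

```shell
#!/bin/sh
# Wondershaper wants Kbps; convert a rate given in Mbps.
mbps_to_kbps() {
  expr "$1" \* 1024
}

mbps_to_kbps 10   # prints 10240
```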


While the use of WonderShaper requires superuser privileges, it limits the system's network bandwidth in its entirety.  Enjoy.

12 September, 2015

Makefile Mystique

Originating in the work of Stuart Feldman at Bell Labs in 1976, the make utility has existed in a number of incarnations since and is one of the most common build utilities for *nix-based systems.  Despite that, in my 16+ years of professional software development, authoring or maintaining makefiles sets off a classic game of 'not it!!' seemingly everywhere I work.

Few would argue that the utility lacks flexibility or power; the general complaint is that the syntax and semantics are confusing and unmaintainable, which is one of the primary reasons popular IDEs synthesize their own makefiles in an attempt to isolate the user from the pain and misery of doing it themselves.  The goal of this post isn't to complain about the utility, but instead to work through a few examples in an attempt to better understand it myself.  While authoring and maintaining a makefile may feel like a prostate exam, it's also likely just as necessary.  In preparing for this post I referred to the documentation here, and I invite you to do the same.

Part of make's popularity and power comes from its implicit rules.  Making use of these rules, you'll find that, like good liquor, a little goes a long way.  This is evident, for example, when your project uses C/C++.

With a single source file, and relying on implicit make rules, the necessary makefile is minimal;
$ cat main.c 
#include <stdio.h>

int main()
{
  printf("(%s:%d) main process initializing\n",__FILE__,__LINE__);
  printf("(%s:%d) main process terminating\n",__FILE__,__LINE__);
}

A single rule comprises the makefile and provides a simple, minimal build system.
$ cat Makefile 
main: main.o

Each makefile consists of 'rules' taking the form;
     target ... : prerequisites ...
     <tab> recipe
     <tab> ...

Examining the rule, we find the target is defined as 'main' with a prerequisite of 'main.o'.  This simply means that in order to create 'main', the 'main.o' file must exist.  The absence of a recipe relies on the implicit rules.  This can be observed by looking at the output when running make, as below;
$ make
cc    -c -o main.o main.c
cc   main.o   -o main

The existence of implicit rules comes with some disadvantages; namely, it's easy not to understand what is being done for you.  Conceptually, the implicit rule that generates the object files takes the form of the suffix rule below;
$ cat Makefile 
main: main.o

.c.o:
	${CC} ${CPPFLAGS} ${CFLAGS} -c -o $@ $<

Understanding what is going on allows tailoring the behavior without explicitly defining a rule.  Note the use of the CPPFLAGS and CFLAGS variables.  Tweaking the original makefile lets us add debugging info and specify optimization level 3, as below;

$ cat Makefile 
CFLAGS += -g -O3
main: main.o

This results in a slight difference when we run make;
$ make
cc -g -O3   -c -o main.o main.c
cc   main.o   -o main

The foundation of make is detecting changes to the prerequisites and determining when the targets need to be remade.  This can be observed by re-running make immediately after a build; the result is a notification that "'main' is up to date".  Updating the main.c timestamp, by modifying the file or simply touching it, will cause the rule to be applied once again.

$ make
cc -g -O3   -c -o main.o main.c
cc   main.o   -o main
user@kaylee:~/make.blog/C$ make
make: `main' is up to date.
user@kaylee:~/make.blog/C$ touch main.c 
user@kaylee:~/make.blog/C$ make
cc -g -O3   -c -o main.o main.c
cc   main.o   -o main

Likely, you've seen this all before, but stay with me I assure you there's more interesting things to come.

Often, it's preferred to have a target that cleans up the directory and allows building from scratch.  The convention is to name such a target clean.  Below is a modified makefile that defines a clean target that simply deletes the executable and the object files.

CFLAGS += -g -O3
main: main.o

clean:
	${RM} main main.o

Executing 'make clean' deletes the main and main.o files.  Adding a file to the project would require adding its object file both to the prerequisites for main and to the recipe for the clean target, or we can make use of pattern substitution.  We'll explicitly define the C source files in a variable, then perform a list substitution replacing the .c extensions with .o to get our object file list.  The object file list can then be used in the target prerequisites and in the clean recipe.  Adding a file then requires only updating the SRCS variable, rather than duplicating the name in multiple locations.  (Note that $(subst) replaces every occurrence of '.c' in the string; GNU make's $(patsubst %.c,%.o,${SRCS}) anchors the match to the suffix and is a bit safer.)

$ cat Makefile 
CFLAGS += -g -O3
SRCS=main.c 
OBJS=$(subst .c,.o,${SRCS})
main: ${OBJS}

clean:
	${RM} main ${OBJS}


Still, there is duplication, namely the multiple references to main; that can be addressed with a new variable definition.

$ cat Makefile 
CFLAGS += -g -O3
PROGS=main
SRCS=main.c 
OBJS=$(subst .c,.o,${SRCS})
${PROGS}: ${OBJS}

clean:
	${RM} ${PROGS} ${OBJS}

Definitely on the right path, but the addition of a file still requires modifying the makefile.  Wildcard expansion, demonstrated in the following makefile, addresses this: the addition or removal of a file with the .c extension in the current directory automatically takes effect in the expansion.

$ cat Makefile 
CFLAGS += -g -O3
PROGS=main
SRCS=${wildcard *.c}
OBJS=$(subst .c,.o,${SRCS})
${PROGS}: ${OBJS}

clean:
	${RM} ${PROGS} ${OBJS}



Let's look at some less typical usages of make, which give us a bit more insight into the creation of targets, prerequisites, and recipes.  ImageMagick is a common utility that we'll be making use of in the following examples.

We'll build up the makefile as we go, incorporating what we've learned above.  We'll satisfy the same objectives using two forms of makefiles: one that makes use of suffix rules, and one that uses pattern rules.

Let's begin by defining our objectives.  Suppose our project requires taking in a list of JPG files and converting each into a series of other image file formats, namely PNG, GIF, JP2 and XWD files.

Using the suffix rule syntax, the makefile can begin taking the following form;

$ cat Makefile.suffix 
.SUFFIXES:
.SUFFIXES: .jpg .png .gif .jp2 .xwd 

all: image.xwd 

.jpg.png:
	${SH} convert $< $@

.png.gif:
	${SH} convert $< $@

.gif.jp2:
	${SH} convert $< $@

.jp2.xwd:
	${SH} convert $< $@

clean:
	${RM} *.gif *.jp2 *.xwd

The all target is the default target, with a prerequisite of image.xwd.  In other words, make is complete when an up-to-date image.xwd file exists.  How it arrives there is make magic, more precisely a series of suffix rules.  A chain of suffix rules is required to get to the final XWD file, and each intermediate target can be kicked off explicitly from the command line; specifying 'make -f Makefile.suffix image.png' fires the .jpg.png suffix rule.  Running 'make' chains the rules, stepping through a series of recipes: JPG => PNG => GIF => JP2 => XWD.
$ make -f Makefile.suffix
convert image.jpg image.png
convert image.png image.gif
convert image.gif image.jp2
convert image.jp2 image.xwd
rm image.jp2 image.gif image.png

Notice the final step removes the intermediate files.  This can be prevented by adding a ".PRECIOUS: %.jpg %.png %.gif %.jp2 %.xwd" line, which tells make not to remove intermediate files with the specified extensions.  The example is a bit contrived, but it demonstrates suffix rules and chaining.  Modifying the source file image.jpg and re-running make will convert the new file to each of the alternative formats once again.

As is, each prerequisite is generated via suffix-rule chaining to completion before moving on to the next prerequisite.  In other words, if you specified image.xwd and image01.xwd, image.xwd would be generated to completion (i.e. JPG => PNG => GIF => JP2 => XWD) before moving on to image01.xwd.

Meeting the same goals, let's utilize pattern rules rather than suffix rules, which are somewhat dated.

$ cat Makefile.pattern 
.PRECIOUS: %.jpg %.png %.gif %.jp2 %.xwd

all: image.xwd

%.png: %.jpg
	${SH} convert $< $@

%.gif: %.png
	${SH} convert $< $@

%.jp2: %.gif
	${SH} convert $< $@

%.xwd: %.jp2
	${SH} convert $< $@

clean:
	${RM} *.gif *.jp2 *.xwd

The most noteworthy difference is in the targets/prerequisites.  A suffix rule like '.jpg.png' names the source suffix first, while the pattern rule '%.png: %.jpg' names the target first, giving the illusion that the rules have been reversed.

What if you want to convert all of the input images to PNGs before moving on to GIFs, then JP2s, then XWDs?  This can be done by specifying a wildcard expansion for the source files, using a substitution expression for each of the formats, and then specifying each format list as a prerequisite of the all target, as follows;
$ cat Makefile.pattern 
.PRECIOUS: %.jpg %.png %.gif %.jp2 %.xwd

SRCS=${wildcard *.jpg}
PNGS=$(subst .jpg,.png,${SRCS})
GIFS=$(subst .jpg,.gif,${SRCS})
JP2S=$(subst .jpg,.jp2,${SRCS})
XWDS=$(subst .jpg,.xwd,${SRCS})

all: ${PNGS} ${GIFS} ${JP2S} ${XWDS}

%.png: %.jpg
	${SH} convert $< $@

%.gif: %.png
	${SH} convert $< $@

%.jp2: %.gif
	${SH} convert $< $@

%.xwd: %.jp2
	${SH} convert $< $@

clean:
	${RM} ${PNGS} ${GIFS} ${JP2S} ${XWDS}

This way, each format is fully satisfied before moving on to the next.  Perhaps less necessary for image files, but more applicable to generating source files.  For example, the Protobuf message compiler generates header/C++ files and allows interdependencies between message files.  This requires all the message files to be converted to C/H files before compilation begins; otherwise a source file may reference a header file that hasn't been created yet.
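As a rough sketch of that idea (the target and file names here are hypothetical, not from a real project), a protobuf makefile might list the generated sources ahead of the objects so that a serial make runs every protoc invocation before any compile step:

```make
PROTOS = $(wildcard *.proto)
PBSRCS = $(PROTOS:.proto=.pb.cc)
PBOBJS = $(PBSRCS:.cc=.o)

# Generated sources appear before the objects in the prerequisite list,
# so a serial make creates every .pb.h before compilation begins.
server: $(PBSRCS) $(PBOBJS)
	$(CXX) $(PBOBJS) -o $@

# One protoc invocation produces both the .pb.cc and the .pb.h.
%.pb.cc %.pb.h: %.proto
	protoc --cpp_out=. $<

clean:
	$(RM) server *.pb.cc *.pb.h *.o
```

Note this ordering guarantee only holds for a serial make; a parallel build (-j) would need the header dependencies stated explicitly.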


Hope you find this useful.  Good luck with your make projects!



26 May, 2014

DVD Backup on Linux

It's been years now since all my primary computers have run Linux as the host operating system.  Few things can't be done in Linux, but one in particular is backing up DVDs, which until now required installing VirtualBox and a Windows VM.  I'd been using that setup for some time to back up DVDs with applications such as DVDFab, DvdShrink, or the like.  With WinXP at EOL and Nero failing to work in Win7 VMs, I finally gave it up after finding a native Linux alternative.

K9copy seems to scratch the itch.  Using Ubuntu 12.04, setting it up takes the form:


$ sudo apt-get install libdvdcss libdvdcss2
$ sudo apt-get update
$ sudo apt-get install libdvdread4
$ sudo /usr/share/doc/libdvdread4/install-css.sh
$ sudo apt-get install k9copy


http://k9copy.sourceforge.net/web/index.php/en/

Happy viewing.

03 May, 2014

Google Archive

Did you know that you can export all your Google artifacts?

https://www.google.com/takeout
Check the items you want to export and kick out a Zip file.

Cheers.

11 April, 2014

Generating Multi-Plot Real-Time Plots with Python

In my last post the real-time plotting capabilities were demonstrated; we're extending on this by showing how to generate multiple plots simultaneously.  A couple of noteworthy observations: in the past post, the X and Y ranges were automatically rescaled after each element addition.  While you can still do this, for multiplots we would typically prefer maintaining a shared X range.  While somewhat unnecessary, I've also elected to maintain a uniform Y range.



#!/usr/bin/python
from pylab import *
import time

def log(M):
  print "__(log) " + M

def test02():
  plt.ion()
  fig = plt.figure(1)
  ax1 = fig.add_subplot(311)
  ax2 = fig.add_subplot(312)
  ax3 = fig.add_subplot(313)
  # Seed each line with a dummy point; the data is replaced in the loop.
  l1, = ax1.plot(100, 100, 'r-')
  l2, = ax2.plot(100, 100, 'r-')
  l3, = ax3.plot(100, 100, 'r-')
  time.sleep(3)

  D = []
  i = 0.0
  while i < 50.0:
    D.append((i, sin(i), cos(i), cos(i*2)))
    T1 = [x[0] for x in D]
    L1 = [x[1] for x in D]
    L2 = [x[2] for x in D]
    L3 = [x[3] for x in D]

    l1.set_xdata(T1)
    l1.set_ydata(L1)

    l2.set_xdata(T1)
    l2.set_ydata(L2)

    l3.set_xdata(T1)
    l3.set_ydata(L3)

    # Hold a shared X range and a uniform Y range across all subplots.
    ax1.set_xlim([0, 50])
    ax2.set_xlim([0, 50])
    ax3.set_xlim([0, 50])
    ax1.set_ylim([-1.5, 1.5])
    ax2.set_ylim([-1.5, 1.5])
    ax3.set_ylim([-1.5, 1.5])

    plt.draw()
    i += 0.10
  show(block=True)

#---main---
log("main process initializing")
test02()
log("main process terminating")

Easy peasy.

04 April, 2014

Generating Real-Time Plots with Python

In my previous post we described plotting data using Matplotlib utilities and Python.  While this may be valuable, it becomes notably more valuable when you can generate 'live' plots at run-time.  In a past job I worked with a series of controls engineers who utilized real-time data plots to debug and develop a highly complex multi-axis weapons system, and it was the first time I understood how a real-time plot of a sequence of steps could simplify a development effort.

Let's get started.
Unlike the previous post, let's create the data and plot it as it is generated.


#!/usr/bin/python
from pylab import *
import time

def log(M):
  print "__(log) " + M

def test01():
  plt.ion()
  fig = plt.figure(1)
  ax1 = fig.add_subplot(111)
  # Seed the line with a dummy point; the data is replaced in the loop.
  l1, = ax1.plot(100, 100, 'r-')

  D = []
  i = 0.0
  while i < 50.0:
    D.append((i, sin(i)))
    T = [x[0] for x in D]
    L = [x[1] for x in D]
    l1.set_xdata(T)
    l1.set_ydata(L)
    # Rescale the axes to the new data, then redraw.
    ax1.relim()
    ax1.autoscale_view()
    plt.draw()
    i += 0.10
    time.sleep(1/10.0)
  show(block=True)

#---main---
log("main process initializing")
test01()
log("main process terminating")

The result is a dynamically generated plot that resembles the following;

Tie this plotting routine to a system providing run-time information via a socket, or perhaps monitor network traffic via packet-capture libraries, and you've got yourself the foundation of a real-time data monitoring system.
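As a sketch of that idea (the names here are mine, not from the listing above), the data source can be decoupled from the plotting loop as a generator yielding (t, y) samples, which the loop would consume in place of computing sin(i) inline:

```python
import math

def sample_stream(step=0.1, stop=50.0):
    # Yield (t, sin(t)) pairs -- the same series the loop above plots.
    # In a real monitor this would read from a socket or a pcap handle.
    t = 0.0
    while t < stop:
        yield (t, math.sin(t))
        t += step

points = list(sample_stream(step=1.0, stop=5.0))
```

The plotting loop then just appends whatever the generator yields, so swapping in a live data feed doesn't touch the drawing code.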

Cheers.