
Belastingdienst in the Netherlands: filing your 2011 tax return from Ubuntu

Look, of course I'm delighted that I can file my tax return from a Linux system. But I find it a pity that they have rather stood still in that respect: "The program 'Aangifte inkomstenbelasting 2011 voor ondernemers' was developed and tested on Ubuntu 8.0.4." Ehm, hello there, tax office… that "8" stands for, yes indeed, 2008! And, well, it is now 2012. Please, there is plenty of expertise in the market to help you get up to date again. Really.

And until then, the IRC log offers all of us the help we need:

http://irclogs.ubuntu.com/2012/04/10/%23ubuntu+1.html

The crux is that nearly all of us have 64-bit PCs these days, while the tax office's executable needs 32-bit libraries.

So which steps do you need to take? Here you go:

  1. sudo apt-get install libc6:i386
  2. sudo apt-get install libx11-6:i386
  3. sudo apt-get install libxext6:i386
  4. sudo apt-get install libsm6:i386
  5. Check whether you now have all the required libraries: ldd bin/wa2011ux.
  6. Make sure there are no "not found" entries in the output.
  7. Oh, and it complains about a font. Apparently you can fix that with the -L or --font option.
  8. This one worked for me: ./wa2011ux --font=-Schumacher-Clean-Medium-R-Normal--12-120-75-75-C-60-KO
  9. Good luck!
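Steps 5 and 6 can also be automated. A minimal Python sketch (the binary path bin/wa2011ux is the one from the steps above; this is an illustration, not part of the official program):

```python
# Sketch: check ldd output for unresolved 32-bit libraries (steps 5 and 6 above).
import subprocess

def unresolved(ldd_output):
    """Return the lines of ldd output that report a library as 'not found'."""
    return [line.strip() for line in ldd_output.splitlines() if "not found" in line]

def check_binary(binary="bin/wa2011ux"):
    """Run ldd on the binary and return any unresolved library lines."""
    result = subprocess.run(["ldd", binary], capture_output=True, text=True)
    return unresolved(result.stdout)
```

Once the four packages from steps 1-4 are installed, check_binary() should return an empty list.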

Getting data out of Jira using a REST interface and Python

It is a lot easier than I expected. Have fun with it! Example code (Python 2, using urllib2):

#!/usr/bin/python
# Author: J. Baten
# Date: 2012-04-10
import urllib, urllib2, cookielib, json
# set up cookiejar for handling URLs
cookiejar = cookielib.CookieJar()
myopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
jira_serverurl="http://server:8080/jira"
# Search using JQL
queryJQL = "updated >= -1d"
IssuesQuery = {"jql" : queryJQL,"startAt" : 0,"maxResults" : 1000 }
queryURL = jira_serverurl + "/rest/api/latest/search"
req = urllib2.Request(queryURL)
req.add_data(json.dumps(IssuesQuery))
req.add_header("Content-type", "application/json")
req.add_header("Accept", "application/json")
fp = myopener.open(req)
data = json.load(fp)
#print json.dumps(data,sort_keys=True, indent=2)
#print data["issues"]
for k in data["issues"]:
    print k["key"]
    # just take the last one for testing
    a = k["key"]
#now how can I get their worklogs?
# /api/2.0.alpha1/issue/{issueKey}
queryURL = jira_serverurl + "/rest/api/latest/issue/" + a
# override for testing purposes
queryURL = jira_serverurl + "/rest/api/latest/issue/ZANDBAK-15"
print queryURL
req = urllib2.Request(queryURL)
#req.add_data(json.dumps(IssuesQuery))
#req.add_header("Content-type", "application/json")
req.add_header("Accept", "application/json")
fp2 = myopener.open(req)
data2 = json.load(fp2)
print json.dumps(data2,sort_keys=True, indent=2)
print "Original estimate :" + str(data2["fields"]["timetracking"]["value"]["timeoriginalestimate"])
print "Current estimate :" + str(data2["fields"]["timetracking"]["value"]["timeestimate"])
# /api/2.0.alpha1/serverInfo
fp2.close()
fp.close()
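For what it's worth, the same search request can be sketched with today's Python 3 standard library, where urllib2 has been folded into urllib.request. The server URL and JQL query are the ones used above; this sketch has not been run against a live Jira:

```python
# Sketch: the JQL search from the script above, rewritten for Python 3.
import json
import urllib.request

def build_search_request(base_url, jql, start_at=0, max_results=1000):
    """Build the POST request for Jira's /rest/api/latest/search endpoint."""
    body = json.dumps({"jql": jql, "startAt": start_at, "maxResults": max_results})
    return urllib.request.Request(
        base_url + "/rest/api/latest/search",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )

def issue_keys(base_url, jql):
    """Run the search and return the keys of the matching issues."""
    req = build_search_request(base_url, jql)
    with urllib.request.urlopen(req) as fp:
        return [issue["key"] for issue in json.load(fp)["issues"]]
```

Separating request construction from sending keeps the JSON/header plumbing testable without a server.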


Linux is cool :-)

Another example of why I think Linux is cool.
My discs (a RAID-1 set, a.k.a. mirroring) were getting full. I added a bigger drive, marked one of the mirror discs as faulty, and Linux automatically added the new drive to the array. I did the same trick with another new drive. Next, I removed the old drives.

Now I had two new drives in a RAID-1 set with large partitions but the same old filesystem. First, let's resize the RAID array…

root@inzicht:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Wed Jun 27 12:10:49 2007
     Raid Level : raid1
     Array Size : 309524288 (295.19 GiB 316.95 GB)
  Used Dev Size : 309524288 (295.19 GiB 316.95 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Sep  6 13:43:14 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : b664fe5b:f5c02869:e0e19a8a:9e985100
         Events : 0.11364584

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8        3        1      active sync   /dev/sda3
root@inzicht:~# mdadm --grow /dev/md0 --size=max
root@inzicht:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Wed Jun 27 12:10:49 2007
     Raid Level : raid1
     Array Size : 973595136 (928.49 GiB 996.96 GB)
  Used Dev Size : 973595136 (928.49 GiB 996.96 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Sep  6 13:43:23 2011
          State : active, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 31% complete

           UUID : b664fe5b:f5c02869:e0e19a8a:9e985100
         Events : 0.11364586

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8        3        1      active sync   /dev/sda3
root@inzicht:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb3[0] sda3[1]
      973595136 blocks [2/2] [UU]
      [======>..............]  resync = 31.8% (309651456/973595136) finish=2175.1min speed=5086K/sec

unused devices: <none>
root@inzicht:~# xfs_growfs /
meta-data=/dev/md0               isize=256    agcount=16, agsize=4836317 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=77381072, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 77381072 to 243398784
root@inzicht:~#
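The resync progress shown in /proc/mdstat can also be read programmatically. A small Python sketch that parses the progress line in the format seen above (a hypothetical helper, just for illustration):

```python
# Sketch: extract the resync percentage from /proc/mdstat output.
import re

def resync_progress(mdstat_text):
    """Return the resync percentage as a float, or None when no resync is running."""
    match = re.search(r"resync\s*=\s*([\d.]+)%", mdstat_text)
    return float(match.group(1)) if match else None
```

On the machine above, resync_progress(open("/proc/mdstat").read()) would have reported 31.8.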

Now, what other non-Unix OS can beat this?


Migrating from CVS to Git

Hello,

Starting a new job, I was confronted with a CVS repository. Needless to say, I wanted to migrate it to Git. I found out there are several ways to do this, some bad, some better. I first used cvs2svn to convert to Subversion, and after that 'git svn clone' for the second step. It turns out that where the first step takes approximately 1.5 hours, the second step can take between 4 and 10 days! I started on my Ubuntu workstation and later switched to some nice hardware, but running CentOS 5. Unfortunately, the standard cvs2svn rpm in that repo is rather old.

Finally I found the best way and it goes like this:

Get the latest cvs2git source from its project page (http://cvs2svn.tigris.org/). I used version 2.3; it is a Python tool. Next, analyse the script below, copy it, and tune it. It will make you happy (unless you are into S&M, in which case I would recommend doing it the hard way using the native cvs2svn rpm on a CentOS 5 system).

This script migrates a 2 GB CVS repo to a Git repo in approximately 1.5 hours. Good luck and happy hacking.

#!/bin/bash
export PYTHONPATH=/home/intrazis/jeroen/cvs-to-git-conversion/cvs2svn-bin/usr/lib/python2.4/site-packages/
export PATH=/home/intrazis/jeroen/cvs-to-git-conversion/cvs2svn-bin/usr/bin:$PATH
p=`pwd`
> cvs2git.log
date >> cvs2git.log 2>&1
echo "Resetting stuff"  >> cvs2git.log 2>&1
rm -rf cvs2git-tmp
mkdir cvs2git-tmp
#rm -rf cvsroot
rm -rf git-repo
mkdir  git-repo
# copy the cvs repo you have to this server
echo "copy repo"
scp -r  user@server:/var/cvsroot .
echo "Ready scp"  >> cvs2git.log 2>&1
date >> cvs2git.log 2>&1

##############################################################################################
#+ cvs2svn --pass=3 --retain-conflicting-attic-files --encoding=ascii --encoding=utf8 --encoding=utf16 --fallback-encoding=utf8 --dumpfile=svndump --write-symbol-info=symbol-info.txt cvsroot
#----- pass 3 (CollateSymbolsPass) -----
#Checking for forced tags with commits...
#The following paths are not disjoint:
#    Path tags/csource contains the following other paths: tags/csource/BasicHTML, tags/csource/IzInit, tags/csource/Login, tags/csource/include,
#Please fix the above errors and restart CollateSymbolsPass
#
#hacking the cvsroot files to boldly go and remove the tag 'csource' .
#  grep -R --exclude=*.gif,v "csource:" cvsroot/*
echo "hacking the cvsroot files to boldly go and remove the tag 'csource' ."  >> cvs2git.log 2>&1
for file in `grep -lR  "csource:" cvsroot/*`
do
  sed -i -e 's/csource:/csource-org:/' $file
done
echo "======================================================="  >> cvs2git.log 2>&1
date >> cvs2git.log 2>&1
##############################################################################################
# put all possible options into the cvs2git.options file. An example is available in the cvs2git python source
cvs2git --options=cvs2git.options >> cvs2git.log 2>&1
date >> cvs2git.log 2>&1
cd git-repo
date >> cvs2git.log 2>&1
git init  >> cvs2git.log 2>&1
date >> cvs2git.log 2>&1
#Load the dump files into the new git repository using git fast-import:
#
#git fast-import --export-marks=../cvs2svn-tmp/git-marks.dat < ../cvs2svn-tmp/git-blob.dat
#git fast-import --import-marks=../cvs2svn-tmp/git-marks.dat < ../cvs2svn-tmp/git-dump.dat
#
# This can, of course, be shortened to:
echo "Start fast-import of dump and blob files"  >> cvs2git.log 2>&1
cat ../cvs2git-tmp/git-blob.dat ../cvs2git-tmp/git-dump.dat | git fast-import
echo "ready"  >> cvs2git.log 2>&1
date >> cvs2git.log 2>&1
cd ..
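The sed loop above, which renames the conflicting 'csource' tag before CollateSymbolsPass, can also be sketched in Python. This is a hypothetical helper, not part of the script; it treats the ,v files as raw bytes, so binary files such as the .gif,v ones pass through the replacement safely:

```python
# Sketch: rename the conflicting CVS tag in every file under the repo root.
import os

def rename_tag(root, old=b"csource:", new=b"csource-org:"):
    """Rewrite each file under root that contains the old tag; return changed paths."""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                data = fh.read()
            if old in data:
                with open(path, "wb") as fh:
                    fh.write(data.replace(old, new))
                changed.append(path)
    return changed
```

Usage would be rename_tag("cvsroot"), mirroring the grep/sed pair in the script.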

Change of plans

After years of working as a freelancer and consultant, I stumbled, through my network, upon a great job.

As of 1 April 2011, I work as head of product development at the Antoniusziekenhuis in Utrecht. There I lead 13 developers working on the in-house developed EPD (Electronic Patient Record) and web technology. For a preview of our work, visit http://www.intrazis.org.

This EPD should not be confused with the Dutch national EPD initiative, to which it is only tangentially related.