I’ve implemented a few rendering techniques for the DICOM Viewer. First of all I implemented a simple MPR (multiplanar reconstruction) to be able to see the three standard planes of a CT: axial, sagittal and coronal.
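The idea behind MPR is simple: once the CT slices are stacked into a volume, the axial plane is a native slice, and the sagittal and coronal planes are just re-slices along the other two axes. Here is an illustrative sketch in Java (the viewer itself runs in the browser; the `voxels[z][y][x]` layout and names are my assumption, not the viewer's actual code):

```java
public class Mpr {
    // Axial: the native slice, fixed z.
    static int[][] axial(int[][][] v, int z) {
        return v[z];
    }

    // Coronal: fix y, gather one row from every slice.
    static int[][] coronal(int[][][] v, int y) {
        int depth = v.length, width = v[0][0].length;
        int[][] out = new int[depth][width];
        for (int z = 0; z < depth; z++)
            out[z] = v[z][y].clone();
        return out;
    }

    // Sagittal: fix x, gather one column from every slice.
    static int[][] sagittal(int[][][] v, int x) {
        int depth = v.length, height = v[0].length;
        int[][] out = new int[depth][height];
        for (int z = 0; z < depth; z++)
            for (int y = 0; y < height; y++)
                out[z][y] = v[z][y][x];
        return out;
    }
}
```

In a real viewer the reconstructed planes also need rescaling, since the slice spacing usually differs from the in-plane pixel spacing.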

Here’s a little video showing this feature:

After that I tried to implement Z-projection rendering such as MIP, MinIP, Average, etc., but I found that Canvas is just too slow for this purpose. So I decided to reimplement the whole rendering engine in WebGL.
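The Z-projections themselves are simple per-pixel reductions along the slice axis, which is why they map so well to the GPU. A minimal CPU sketch of the three modes (illustrative Java; the actual engine does this in a WebGL shader):

```java
public class ZProjection {
    // Reduce the volume v[z][y][x] along z.
    // mode: 0 = MIP (max), 1 = MinIP (min), 2 = Average.
    static int[][] project(int[][][] v, int mode) {
        int d = v.length, h = v[0].length, w = v[0][0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                long acc = v[0][y][x];
                for (int z = 1; z < d; z++) {
                    int p = v[z][y][x];
                    if (mode == 0) acc = Math.max(acc, p);
                    else if (mode == 1) acc = Math.min(acc, p);
                    else acc += p;
                }
                out[y][x] = (mode == 2) ? (int) (acc / d) : (int) acc;
            }
        }
        return out;
    }
}
```

Doing this in Canvas means touching every voxel in JavaScript per repaint, which explains the slowdown; in WebGL each output pixel is reduced in parallel.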

So I did it, and the speed improvement is huge, even on my old laptop.

Apart from the Z-projection techniques, I implemented an enhancement filter to sharpen the image and get a better result:
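The post doesn't say which filter the viewer uses, but a common sharpening choice is a 3×3 convolution with a centre weight of 5 and the four direct neighbours at −1. A sketch of that idea (the kernel is my assumption):

```java
public class Sharpen {
    // 3x3 sharpening kernel: centre 5, direct neighbours -1.
    // Border pixels are copied unchanged; output clamped to 0..255.
    static int[][] apply(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            out[y] = img[y].clone();
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int v = 5 * img[y][x]
                      - img[y - 1][x] - img[y + 1][x]
                      - img[y][x - 1] - img[y][x + 1];
                out[y][x] = Math.max(0, Math.min(255, v));
            }
        }
        return out;
    }
}
```

On a flat region the weights sum to 1, so the image is unchanged; at edges the filter amplifies the local contrast.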

HTML5 DICOM

And here you can see this last implementation in action (please note that the poor quality is due to codec compression when uploading to YouTube):

I work on healthcare software focused on radiology. I’ve tried many radiology image viewers; lately they’re moving from the desktop to the web, but all of them have something in common: you need to install something (Java, ActiveX, Flash, …). So I decided to give HTML5 technologies a try and create a fully plugin-less viewer.

HTML5 DICOM

The main problem with medical images is that they’re stored in DICOM format. Basically it’s a binary format with metadata describing the patient, study parameters, etc., followed by 16-bit image data. Because of those 16 bits per pixel, the best approach is to download the whole image in DICOM format and process it directly on the client side to render the final 8 bpp image. Right now I’m developing the standard 2D manipulation tools, but I would like to try some 3D post-processing in the future (probably using mrdoob’s three.js).
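That 16-to-8-bit step is essentially DICOM window/level mapping: a window centre and width pick the range of stored values that gets stretched over the display range. A simplified sketch (ignoring rescale slope/intercept and the standard's exact half-pixel offsets):

```java
public class WindowLevel {
    // Map a stored 16-bit value to 0..255 using window centre/width.
    // Values below the window clip to black, above it to white.
    static int toByte(int stored, double center, double width) {
        double lo = center - width / 2.0;
        double hi = center + width / 2.0;
        if (stored <= lo) return 0;
        if (stored >= hi) return 255;
        return (int) Math.round((stored - lo) / (hi - lo) * 255.0);
    }
}
```

This is also why the raw DICOM has to reach the client: changing the window interactively (e.g. lung vs. bone window) only makes sense if the full 16-bit data is still available.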

The main drawback I’ve found using HTML5 is the differences between browsers in the Web Worker standard, especially when communicating with the main thread via postMessage. I hope they (Firefox) will fix this in the future :)

It’s been a long time since my last post :) I tried to avoid the typical “this year I’ll do blah blah blah” posts that bloggers publish in the first weeks of January, so I preferred to wait until I had something more interesting to say.

I’m still coding for fun as usual, doing some little Android stuff and so on… but if there’s something I love as much as coding, or even more, it’s music!

I play guitar and I wanted to record some songs with some decent quality (sadly just recording quality, not playing quality :P). So I decided to treat myself (I love myself so much) to a very nice recording pack: the PreSonus AudioBox (thank you for the advice).

Presonus audiobox

It comes with everything you need to record: a very nice sound card with two microphone/line inputs with 48V phantom power, a large condenser microphone, and monitor headphones.

Along with that hardware, it also includes a license for the Studio One software, which I found very easy and intuitive to use.

There’s not much more to say; I’ve just recorded a few tests. You can listen to them in the music section, where you can hear the difference between my old microphone and the new one :)

I’ve finally managed to finish the application I’ve been working on for the last few weeks. It’s a simple metronome that I started for two reasons: I wanted to try audio programming on Android, and I really needed one for my guitar lessons.

The first thing I did, obviously, was try to find a suitable metronome on the Market, but as I mentioned in my previous post, Android audio apps usually suffer from some kind of lag. That latency may not be a real problem in a game, but for a metronome it makes the application useless.

I used some tricks from my demoscene days, from when I coded my softsynth, and finally got it working properly.
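The post doesn’t say which tricks those were; one classic softsynth-style approach is to stream audio continuously and mix each click into the stream at its exact sample offset, instead of triggering a sound per beat with timers (which is where the lag comes from). A minimal sketch of that idea in plain Java — in an Android app this buffer would be fed to an AudioTrack in streaming mode:

```java
public class ClickStream {
    // Fill one output buffer that starts at absolute frame streamPos,
    // mixing in every click whose samples fall inside this buffer.
    // Beat positions are computed from the stream clock, so the tempo
    // never drifts no matter when the callback runs.
    static void render(short[] buf, long streamPos, double bpm,
                       int sampleRate, short[] click) {
        java.util.Arrays.fill(buf, (short) 0);
        long interval = Math.round(60.0 / bpm * sampleRate); // frames per beat
        // Start from the earliest beat whose click could still overlap here.
        long firstBeat = Math.max(0, (streamPos - click.length) / interval);
        for (long b = firstBeat; b * interval < streamPos + buf.length; b++) {
            long start = b * interval;
            for (int i = 0; i < click.length; i++) {
                long p = start + i;
                if (p >= streamPos && p < streamPos + buf.length)
                    buf[(int) (p - streamPos)] += click[i];
            }
        }
    }
}
```

The key property is that timing lives in the sample counter, not in the thread scheduler: even if a buffer is delivered late, the clicks inside it still land on the exact frame.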

The next step after getting the engine working was to create a comfortable, nice-looking interface (thanks for the advice).

Perfectmetronome interface

I’ve just finished a few details, such as choosing between different soundbanks, automatically increasing/decreasing the BPM, and storing track presets and songs. That last one was probably the hardest part, due to the amount of UI needed to manage songs and tracks, and because I was an absolute newbie with SQLite.

Here you can find more detailed info about the application, called Perfect Metronome. I know… the name may sound very arrogant, but I was just so excited when I got it working :)

Anyway, here are the codes for the lite version (limited to 1 minute of playback) and the paid one (€0.80):

Below you can see four recordings: three metronomes from the Android Market plus Perfect Metronome. All the recordings were made at 120 BPM with no other processes running, to show the lag effect I mentioned before.

Steady tempo comparison

Android audio sucks

This love couldn’t last forever… and I got really annoyed when I found out that the Android audio API is complete crap.

I was playing around with the three available APIs: AudioTrack, MediaPlayer and SoundPool. My plan was to make a simple music sequencer, with the idea of trying a little softsynth in the future :)

After the first tests I was hugely disappointed by the poor timing. I tried creating a simple metronome that just plays a sound continuously in a loop, and got the same result: lag from time to time.

I started searching the internet for possible solutions, in case I was doing something completely wrong, but I found almost nothing. So I decided to download some highly ranked metronomes from the Android Market to check whether they suffered from the same problem.

I was surprised to find that all the ones I downloaded showed some delay from time to time compared with a real metronome (some lagged sooner and were easy to notice even without a side-by-side comparison; others lagged less and later). I even found a visual metronome with the following description: “A simple metronome using flashing lamps. It does NOT click, since the effort of clicking seems to make some Android metronomes lose time.”

I thought: “OK… I think it’s time to go to the NDK and do this in native code.” It would be nice if the NDK supported audio :) but it seems they’ve focused mostly on graphics.

I’ve read in many places that they’ll add NDK support for audio in Android 3.0, but none of those comments were official, so I’ll just cross my fingers and keep praying.

Regarding the current API, I’ve tried every combination I could: using all three APIs, creating high-priority audio threads, creating tasks with a Handler, posting lots of messages at fixed time steps… but none of them worked well enough.

It seems the GC just drops by to say hello from time to time and “help” with this mess.

I think developing for these kinds of mobile devices looks like developing for computers some years ago, when good performance, keeping an eye on resources, and having enough stability were the main goals.

Nowadays on desktop computers almost no one cares about those things, since we have enough resources and speed for almost anything you could want to program, and I think that philosophy has also reached these devices. It would be much better to have the native API in a high-performance language such as C or C++, and then add an extra layer for those who don’t want to worry about it and prefer a more academic programming language :_)

Anyway, after crying a little, I’ll go back to coding :D