tag:blogger.com,1999:blog-77521084097760705782024-03-05T21:06:38.805-08:00Dumptruck Full of BitsBradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.comBlogger14125tag:blogger.com,1999:blog-7752108409776070578.post-83414450564298914122012-12-31T00:55:00.001-08:002012-12-31T00:55:08.223-08:00OpenGL musingsI finally got a cube to render. Actually the real breakthrough was getting anything at all to render using the new GL pipeline. <br />
<br />
I recently discovered all of my GL knowledge is apparently hopelessly out of date. The old immediate mode rendering mechanisms and the transformation matrix stacks are all gone in GL3 and up. My work on trying to create cool graphical effects similar to movie-style stuff, as mentioned previously, has branched out seemingly endlessly. For example...<br />
<br />
<ul>
<li>Simple mouse-based navigation seems pretty silly since I have a 3D mouse on my desk, so I made a foray into figuring out how to access the device in Java, and then how to translate the 6 integers it gives me back into a transformation of the view matrix. I discovered that a quaternion/vector based camera isn't really well suited to a model where the mouse moves the camera, as opposed to one where the mouse moves the world. I'm still working on a better understanding of quaternions, cameras and matrices.</li>
<li>For some reason I decided it would be in my best interest to move exclusively to Ubuntu. This in turn led to no end of re-bootstrapping stuff and tweaking UIs and learning shortcuts. However I think the end result is pretty nice, though I have to say in my opinion, the Gnome-Shell 3 kicks Unity's ass so hard it's not even funny. </li>
<li>I want to have the option to do some development, or at least research on my tablet while I'm commuting, so I'm investigating AGIT, the Android Git client. Additionally I've also got a tiny Bluetooth keyboard for my tablet(s).</li>
<li>I discovered that the GL rendering pipeline now expects all transformations to be done in user space or pushed onto the GPU via shaders, so I buckled down and started learning GLSL. Working with VAOs and VBOs in Java is particularly grueling when you have no way of debugging the shaders themselves, at least not by having them produce any debugging output. My Amazon wishlist is now chock full of expensive OpenGL reference books.</li>
<li>I've also discovered that SWT is apparently incompatible with GL3 in JOGL. Not sure how deep that goes. </li>
<li>Unrelated to my main pursuits, but I also spent an inordinate amount of time this weekend attempting to move a new laptop running Windows 8 in UEFI mode to an SSD. Just incredibly painful. I finally got a handle on it when I realized I could make a Windows 8 recovery USB stick from my wife's tablet computer. Next time, do the research before you start, or better yet, just buy the fucking laptop with the SSD. Given the value of my time, I certainly didn't save any money there.</li>
</ul>
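Since I keep tripping over the camera-versus-world distinction, here's roughly what I mean, as a sketch in plain Java (hypothetical helper code, not what's actually in my project): the incremental rotation derived from the mouse is a quaternion, and whether it multiplies the current orientation on the left or the right determines whether you've rotated the camera in its own local frame or spun the world around it.

```java
public class QuatCamera {
    // Quaternion stored as {w, x, y, z}
    static double[] fromAxisAngle(double ax, double ay, double az, double radians) {
        double s = Math.sin(radians / 2);
        return new double[] { Math.cos(radians / 2), ax * s, ay * s, az * s };
    }

    // Hamilton product a * b
    static double[] multiply(double[] a, double[] b) {
        return new double[] {
            a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
            a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
            a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
        };
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
    }

    // Rotate vector v by unit quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)), u = q's vector part
    static double[] rotate(double[] q, double[] v) {
        double[] u = { q[1], q[2], q[3] };
        double[] t = cross(u, v);
        t = new double[] { 2 * t[0], 2 * t[1], 2 * t[2] };
        double[] c = cross(u, t);
        return new double[] { v[0] + q[0]*t[0] + c[0], v[1] + q[0]*t[1] + c[1], v[2] + q[0]*t[2] + c[2] };
    }

    public static void main(String[] args) {
        // A 90-degree yaw increment, as might come from a (hypothetical) 3D-mouse axis
        double[] yaw = fromAxisAngle(0, 1, 0, Math.PI / 2);
        double[] forward = rotate(yaw, new double[] { 0, 0, -1 });
        // forward is now approximately (-1, 0, 0)
        System.out.printf("forward after yaw: (%.1f, %.1f, %.1f)%n", forward[0], forward[1], forward[2]);

        double[] orientation = { 1, 0, 0, 0 }; // identity; differs in general
        // "mouse moves the camera": apply the increment in camera-local space
        double[] cameraStyle = multiply(orientation, yaw);
        // "mouse moves the world": apply the same increment in world space
        double[] worldStyle = multiply(yaw, orientation);
    }
}
```

With an identity orientation the two products coincide, but as soon as the camera has any accumulated rotation they diverge — which is exactly the behavior difference I kept fighting.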
<div>
The more I branch out, the more I have to keep in mind that while I can probably fix anything, given enough time, I can't fix everything if I want to make actual progress on the things that matter to me. Case in point: I didn't really need to switch to the new GL pipeline to hit the basic target I currently have for myself (replicating the Perlin noise isosurfaces from the Tron visual effects blog entry I previously linked). For several weeks it has felt like I've been getting further away from actually doing that, because today I'm just happy to have rendered a cube. However, the fact is that knowing GLSL and the new GL pipeline is probably going to be useful well beyond my current task, so I don't mind investing some time in it. </div>
<div>
<br /></div>
<div>
On the switch to Ubuntu, I'm not sure if it was a specific need I had, or just a realization that there was no longer anything tying me to Windows. My company has finally started moving to Ubuntu for work desktops and laptops, so I've abandoned the MacBook Air in favor of a Lenovo laptop with Ubuntu on it (which means I now have a MacBook Air for sale). I also moved my work desktop from the RHEL5 distribution it had been running to the new Ubuntu distribution. This frees me from having to actually take a 'UI' machine into the office, as I'm now happy enough with the interface to use the machine directly for development (Go-Go-Gadget Gnome-Shell 3!). I guess I figured 'Why bother having another OS at home when I could make them all homogeneous?'. No more having to remember the keyboard shortcuts for several different OSes depending on where I'm typing.</div>
<div>
<br /></div>
<div>
P.S. As far as I can tell all keyboards still suck.</div>
Bradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-15999621579310772532012-11-26T17:57:00.000-08:002012-11-26T18:02:18.784-08:00OpenGL annoyancesI wanted to show my co-worker Ryan the work I'd done with the noise functions over the weekend, since he's also interested in graphics development. I grabbed the JOCL and JOGL libraries and downloaded the demo code from GitHub, set up everything in Eclipse and...<br />
<br />
Failure. The OpenCL library refuses to set up a shared buffer on my Mac. "Oh well," I thought, "I can still show him on my work desktop." I spent a few minutes replicating the same environment there and... <br />
<br />
More failure. The CL subsystem still refuses to create a shared buffer, though it appears that this has something to do with Mesa. <br />
<br />
Granted, neither of these machines is exactly bleeding edge, and neither has a discrete GPU, but they should still be able to fall back to the CPU for these operations. I suppose I could try porting the noise functions out of OpenCL and into a vertex shader, but I can't really spend that kind of time showing off to my coworkers during work hours.<br />
<br />
I guess I'll try attacking the Mac side when I get home tonight if I have time. It seems to revolve around setting the CL_CGL_SHAREGROUP_KHR value on the CLGL context. One would hope that having a third-party library mediate the interaction with things like OpenGL and OpenCL would smooth away platform differences like this, or at least report that a platform doesn't support a particular feature with a reasonable error message.Bradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-24991295489677912242012-11-25T01:25:00.000-08:002012-12-31T00:25:45.615-08:00Weekend diversions, softwareSo, for whatever reason, I feel my computer UI should look cooler. Not be more functional (though hopefully not be incredibly less functional), but just look cooler. Watching movies like <a href="https://www.amazon.com/dp/B001FD5KJM/ref=as_li_ss_til?tag=lucienslibrary0d&camp=0&creative=0&linkCode=as4&creativeASIN=B001FD5KJM&adid=1A0TF61Q6568M2V33MP4&">Iron Man</a>, <a href="https://www.amazon.com/dp/B004R63MWQ/ref=as_li_ss_til?tag=lucienslibrary0d&camp=0&creative=0&linkCode=as4&creativeASIN=B004R63MWQ&adid=1V1XEAMV7XFMDYC1FXYP&">Tron: Legacy</a>, and <a href="http://www.amazon.com/Ghost-in-the-Shell-2-0/dp/B003AJUHKQ/ref=sr_1_3?s=instant-video&ie=UTF8&qid=1353833613&sr=1-3&keywords=ghost+in+the+shell">Ghost in the Shell</a> leaves me feeling like there's a lot of potential being wasted, if not in utility, at least in presentation.<br />
<br />
<a href="https://www.amazon.com/dp/B004R63MWQ/ref=as_li_ss_til?tag=lucienslibrary0d&camp=0&creative=0&linkCode=as4&creativeASIN=B004R63MWQ&adid=1V1XEAMV7XFMDYC1FXYP&">Tron: Legacy</a> in particular has a lot of UI elements scattered about, and for all its failings in storytelling, has some of my favorite art direction of any film. Happily, there's information <a href="http://jtnimoy.net/workviewer.php?q=178">out there</a> by some of the people who worked on it about how they did some of the elements in the film:<br />
<blockquote class="tr_bq">
<br />
When fixing Quorra, there was an element in the DNA interface called
the Quorra Heart which looked like a lava lamp. I generated an
isosurface from a perlin-noise volume, using the marching cubes
function found in the Geometric Tools WildMagic API, a truly wonderful
lib for coding biodigital jazz, among other jazzes. </blockquote>
I decided to see if I could approximate something like the effect shown, so I started fiddling with what tools I could find. I started by trying to get a CUDA development environment working, but even though nVidia says they support development in Eclipse, they only actually do so for OSX and Linux. If you're developing on Windows, you have to be using Visual Studio 2008 or 2010. And not Visual Studio Express either, but minimum Visual Studio Professional, which retails starting at $600. I'm way more likely to reformat my machine to Linux than I am to fork over $600 to MS just to goof off with visual effects, so I started looking into alternatives. Happily, there are some pretty nice Java libraries for providing bindings to OpenGL and OpenCL which are usable independent of what windowing system you happen to be using. I tend to prefer SWT because even though it's a little harder to get off the ground, I feel it gives you a better experience due to the thin wrapping of native controls.<br />
<br />
I'm still pretty far away from transforming a 4D noise function into a
set of isosurfaces bounded by a sphere, but I have been able to get a 2D
point mesh to deform over time by passing it through a 3D noise
function (actually a whole set of different noise functions implemented
in OpenCL, which I found <a href="http://developer.apple.com/library/mac/#samplecode/OpenCL_Procedural_Noise_Example/Listings/noise_kernel_cl.html">here</a>). It's visible in this picture of my workspace on the top monitor, though a static image doesn't really do it justice. <br />
<br />
<br />
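For the curious, the deformation amounts to sampling a noise function at each grid point's (x, y, time) and using the result as a displacement. Here's a toy stand-in in plain Java — hash-based value noise with trilinear interpolation, nowhere near the quality of the kernels in Apple's sample, but the same shape of computation:

```java
public class NoiseMesh {
    // Integer-lattice hash -> pseudo-random value in [0, 1]
    static double lattice(int x, int y, int z) {
        int h = x * 374761393 + y * 668265263 + z * 1274126177;
        h = (h ^ (h >>> 13)) * 1103515245;
        return ((h ^ (h >>> 16)) & 0x7fffffff) / (double) 0x7fffffff;
    }

    static double lerp(double a, double b, double t) { return a + (b - a) * t; }

    // Trilinearly interpolated value noise over the unit lattice
    static double noise(double x, double y, double z) {
        int xi = (int) Math.floor(x), yi = (int) Math.floor(y), zi = (int) Math.floor(z);
        double xf = x - xi, yf = y - yi, zf = z - zi;
        double c00 = lerp(lattice(xi, yi, zi),         lattice(xi + 1, yi, zi),         xf);
        double c10 = lerp(lattice(xi, yi + 1, zi),     lattice(xi + 1, yi + 1, zi),     xf);
        double c01 = lerp(lattice(xi, yi, zi + 1),     lattice(xi + 1, yi, zi + 1),     xf);
        double c11 = lerp(lattice(xi, yi + 1, zi + 1), lattice(xi + 1, yi + 1, zi + 1), xf);
        return lerp(lerp(c00, c10, yf), lerp(c01, c11, yf), zf);
    }

    public static void main(String[] args) {
        // Displace a small 2D point grid along Z, with time as the third noise coordinate
        double t = 1.5;
        for (int gy = 0; gy < 4; gy++) {
            for (int gx = 0; gx < 4; gx++) {
                System.out.printf("%.2f ", noise(gx * 0.35, gy * 0.35, t));
            }
            System.out.println();
        }
    }
}
```

Animating `t` each frame is what makes the mesh ripple; in the real thing that sampling happens per-vertex on the CL device rather than on the CPU.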
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0rdIL8w6rhb6wrTSiErzyI3f_i5zM-hOeK7d648zZd1JyuykuSlN_qaMCvzk9pzJCiH9-QvvGY6viQGIwws4I_u-bPDy6ri9VBH2anymVjUGUn3EIs8Xb8YT3eJn5Wc0BDuVu66aPPonS/s1600/IMG_20121124_235758.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0rdIL8w6rhb6wrTSiErzyI3f_i5zM-hOeK7d648zZd1JyuykuSlN_qaMCvzk9pzJCiH9-QvvGY6viQGIwws4I_u-bPDy6ri9VBH2anymVjUGUn3EIs8Xb8YT3eJn5Wc0BDuVu66aPPonS/s640/IMG_20121124_235758.jpg" width="640" /></a></div>
Bradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-75880156295587818822012-11-25T00:48:00.001-08:002012-11-26T18:01:48.289-08:00Weekend diversions, hardwareI haven't spent much time on the music re-org lately. I've boiled the effort down to taking a chunk of artists at a time, passing them through MusicBrainz Picard to properly tag and rename them and then searching for duplicated files after the renaming.<br />
<br />
I have however redone the arrangement of computers at home. I was previously running a 23" LG 3D monitor, plus an older 17" ViewSonic 4:3 ratio monitor to the left of the main screen. I've moved the 23 inch to my wife's machine, replacing her 20 inch widescreen ViewSonic. In place of the 23" LG as my primary, I now have a <a href="https://www.amazon.com/dp/B008A3KFB8/ref=as_li_ss_til?tag=lucienslibrary0d&camp=0&creative=0&linkCode=as4&creativeASIN=B008A3KFB8&adid=03B33EZ3EEDRRHMCY44Z&">27" ViewSonic LED</a> monitor. Directly above that I have a <a href="https://www.amazon.com/dp/B004KCPH84/ref=as_li_ss_til?tag=lucienslibrary0d&camp=0&creative=0&linkCode=as4&creativeASIN=B004KCPH84&adid=1TYS4HCNAHFF4J4VB1NC&">24" ViewSonic LED</a>. To my left is now a <a href="https://www.amazon.com/dp/B000ECUMTS/ref=as_li_ss_til?tag=lucienslibrary0d&camp=0&creative=0&linkCode=as4&creativeASIN=B000ECUMTS&adid=0893RNHYXP7PQBSFZBQV&">wall mounted laptop stand</a> which holds my work machine, and next to it is Kat's old 20 inch, rotated into portrait mode for reading and browsing long form content.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0rdIL8w6rhb6wrTSiErzyI3f_i5zM-hOeK7d648zZd1JyuykuSlN_qaMCvzk9pzJCiH9-QvvGY6viQGIwws4I_u-bPDy6ri9VBH2anymVjUGUn3EIs8Xb8YT3eJn5Wc0BDuVu66aPPonS/s1600/IMG_20121124_235758.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0rdIL8w6rhb6wrTSiErzyI3f_i5zM-hOeK7d648zZd1JyuykuSlN_qaMCvzk9pzJCiH9-QvvGY6viQGIwws4I_u-bPDy6ri9VBH2anymVjUGUn3EIs8Xb8YT3eJn5Wc0BDuVu66aPPonS/s320/IMG_20121124_235758.jpg" width="320" /></a></div>
<br />
The center monitor can serve dual duty as the primary screen for the MacBook Air or my desktop, depending on whether I'm working or not. The top screen gives me a target for graphical development, media playing, or if the primary monitor is being driven by the Mac, as a conventional aspect monitor for my desktop.<br />
<br />
Graphics cards that will drive three monitors concurrently are surprisingly hard to come by. It appears that, for nVidia at least, you need a GeForce 6xx series card. I actually initially bought a higher-end GeForce 5xx that had 3 outputs, only to find that if I tried to enable the third monitor it would shut off one of the other two. I didn't feel particularly like living on the bleeding edge, so I bought a more conservative <a href="https://www.amazon.com/dp/B009KUT322/ref=as_li_ss_til?tag=lucienslibrary0d&camp=0&creative=0&linkCode=as4&creativeASIN=B009KUT322&adid=1S4BGJWBY3CN6GDJKF45&">GeForce GTX 650 TI</a> which still manages to blow the doors off my old GeForce GTX 275, which in turn was still perfectly adequate for all of my actual gaming and development needs (with the exception of triple monitor support). <br />
<br />
I'm still not completely happy with my mouse and keyboard. The mouse isn't too bad, a Logitech Performance MX, but it seems to be subject to occasional wireless interference and the mouse wheel is virtually impossible to click as a middle mouse button. The keyboard is an ancient Logitech Cordless Elite Duo (long since separated from the mouse half of the Duo). It's nearly 10 years old and has a good feel to the keys, but its wireless receiver is a big dongle with a now-useless portion for hooking up to your PS2 keyboard port. Unfortunately I haven't found a newer Logitech keyboard that satisfies all my needs:<br />
<ul>
<li>Large, easily accessible media keys, preferably with a dial for volume. If I have to hit a 'Fn' key to use a media key, then it's a non-starter</li>
<li>Keys that are at least a centimeter high and depress most of their height. I can't stand these keys that barely move.</li>
<li>The Home/End/PgUp/PgDown/Ins/Del keys need to be arranged in a 3x2 grid. Many, if not most, Logitech keyboards use a 2x3 layout that has a double height delete key and no insert key.</li>
<li>No gaming keys. I have a gaming keyboard I plug in when I want to play World of Warcraft, but 90% of the time it would be taking up too much space on my keyboard tray. </li>
</ul>
I wouldn't think this would be a hard set of criteria to meet, but Logitech can't seem to do it with anything newer than the Cordless Elite Duo from 2003. The closest they come is the K350, and it has the wrong layout for the Home/End key cluster, which for some reason I just can't seem to get over. Maybe it's time to start looking at Microsoft keyboards. <br />
<br />Bradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-44253537883915630102012-11-12T00:07:00.000-08:002012-11-26T18:01:08.194-08:00Fixing my music... part 3So I now officially know more about ID3 tags than I ever really wanted to know. Actually I think I may have passed that mark when I did the original work on the Mp3agic library to debug why it wouldn't parse my tags. Regardless, I now know even more.<br />
<br />
Many of the files I'm trying to dedupe turn out to have exactly the same size. I have to attribute this to the likelihood that MP3 frames have some minimum size, so if the difference between two of them is only in a few frames that are under the minimum size, the total file size will be the same.<br />
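The frame math is consistent with that: for MPEG-1 Layer III, every frame at a given bitrate and sample rate has the same byte length, give or take one padding byte, so files differing only in the contents of a few frames can easily match in total size. A quick sanity check (this formula applies to MPEG-1 Layer III only):

```java
public class FrameSize {
    // MPEG-1 Layer III frame length in bytes: 144 * bitrate / sampleRate, +1 if the padding bit is set
    static int frameLength(int bitrate, int sampleRate, boolean padded) {
        return 144 * bitrate / sampleRate + (padded ? 1 : 0);
    }

    public static void main(String[] args) {
        System.out.println(frameLength(128_000, 44_100, false)); // 417
        System.out.println(frameLength(128_000, 44_100, true));  // 418
        System.out.println(frameLength(192_000, 44_100, false)); // 626
    }
}
```

So a CBR file's audio section is essentially frameCount × 417 or 418 bytes; two retagged copies of the same rip will land on the same total unless the tags themselves differ in size.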
<br />
I've spent the bulk of my evening trying to figure out how to meaningfully compare the ID3 tag information from two files, and it ended up involving a lot of extensions to Mp3agic's functionality. The library itself seems to focus on preserving the data from the source file. For instance, all text fields can be encoded with one of four possible encodings. If the same data is in two different files with two different encodings, the library preserves that information, so if you compare the two frames, they show up as different, even though the difference is actually academic. They represent the exact same information, but happen to have been written out by two different pieces of software that each had their own idea about the best encoding to use. I have my own opinion on the best encoding to use. It's called "UTF-8 or GTFO".<br />
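Concretely, the comparison I want decodes each payload before comparing, so the encoding stops mattering. A sketch with a hypothetical helper (this is not Mp3agic's actual API):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class FrameCompare {
    // Compare two text-frame payloads by decoded value, ignoring how each happened to be encoded
    static boolean sameText(byte[] a, Charset aEnc, byte[] b, Charset bEnc) {
        return new String(a, aEnc).equals(new String(b, bEnc));
    }

    public static void main(String[] args) {
        byte[] utf8  = "The Beatles".getBytes(StandardCharsets.UTF_8);
        byte[] utf16 = "The Beatles".getBytes(StandardCharsets.UTF_16);
        System.out.println(Arrays.equals(utf8, utf16));               // false: raw bytes differ
        System.out.println(sameText(utf8, StandardCharsets.UTF_8,
                                    utf16, StandardCharsets.UTF_16)); // true: same information
    }
}
```

A byte-for-byte frame comparison reports these as different; decoding first reports what I actually care about.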
<br />
Additionally, a number of differences I'm seeing are attributable to the variations between sub versions of ID3v2. For instance, ID3v2.4 supports the TSOP frame type, which is for the artist name as it should be used for sorting, i.e. 'Beatles' instead of 'The Beatles'. ID3v2.3 didn't support this tag, but some programs apparently started populating a field called XSOP, which serves the same purpose. 2.3 also had a number of different fields for parts of the recording date, which have collectively been replaced by a single TDRC field which stores the full date. Mp3agic could smooth this over by supporting a normalized representation of the data which doesn't care about encoding and does its best to migrate data into a canonical internal format, but it doesn't. <br />
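The normalization I have in mind is mostly a lookup table from legacy or nonstandard frame IDs to their ID3v2.4 equivalents — a sketch (note that for the date frames a rename alone isn't enough: the separate v2.3 values would also need to be merged into a single TDRC string):

```java
import java.util.HashMap;
import java.util.Map;

public class FrameNormalizer {
    // Legacy / nonstandard frame IDs -> their ID3v2.4 equivalents
    static final Map<String, String> CANONICAL = new HashMap<>();
    static {
        CANONICAL.put("XSOP", "TSOP"); // nonstandard v2.3 sort-artist -> v2.4 TSOP
        CANONICAL.put("TYER", "TDRC"); // v2.3 year                    -> v2.4 recording date
        CANONICAL.put("TDAT", "TDRC"); // v2.3 day/month               -> v2.4 recording date
        CANONICAL.put("TIME", "TDRC"); // v2.3 time of day             -> v2.4 recording date
    }

    static String canonicalId(String frameId) {
        String canonical = CANONICAL.get(frameId);
        return canonical != null ? canonical : frameId;
    }

    public static void main(String[] args) {
        System.out.println(canonicalId("XSOP")); // TSOP
        System.out.println(canonicalId("TIT2")); // TIT2 (already canonical)
    }
}
```

With IDs canonicalized (and text decoded as above), two tags written by different software finally compare as the same when they really are.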
<br />
<br />Bradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.com1tag:blogger.com,1999:blog-7752108409776070578.post-77282915625857240802012-11-11T19:33:00.001-08:002012-11-26T18:00:45.645-08:00Fixing my music... part 2The analysis code for organizing my music collection was fun to write. I'm using my basic environment of Java 7, Guava, and SLF4j. For serialization I'm using Jackson, and for stopwatch functionality I'm using Spring, which has to be the stupidest possible reason to use spring, but since I'm using Maven for dependency management, it amounts to adding a line in a config file that say 'use spring'.<br />
<br />
Going through 17k music files is prone to being slow, so I'm using multiple threads. I'm pretty sure I'm actually going to be bottlenecked on disk IO, since I'm hosting all these files on physical platters, not having a spare >80GB SSD to park them on. As I winnow the collection down I may move it to the SSD so future operations are faster. Regardless, I still want the multiple threads because I'm going to be running a lot of hash functions on each file, so I want to saturate my CPUs where I can at least. I have 8 cores (well... 4 if you don't count hyper-threading) so I figured 6 threads wouldn't impact the responsiveness of the OS while I was running this, and it didn't.<br />
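The worker-pool shape is nothing exotic — a sketch (modern Java for brevity, not my actual code): a fixed pool of 6, MD5 per file, and an AtomicInteger for the progress counter so the every-100-files report doesn't race.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class HashPool {
    static String md5Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static ConcurrentMap<Path, String> hashAll(List<Path> files) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(6); // 6 of my 8 (hyper)threads
        ConcurrentMap<Path, String> results = new ConcurrentHashMap<>();
        AtomicInteger done = new AtomicInteger();
        for (Path p : files) {
            pool.submit(() -> {
                results.put(p, md5Hex(Files.readAllBytes(p)));
                int n = done.incrementAndGet(); // atomic, so the count can't be double-reported
                if (n % 100 == 0) System.out.println(n + " files processed");
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        return results;
    }

    public static void main(String[] args) throws Exception {
        // Tiny demo on a temp file instead of 17k MP3s
        Path dir = Files.createTempDirectory("hashdemo");
        Path a = Files.write(dir.resolve("a.bin"), new byte[0]);
        System.out.println(hashAll(List.of(a)).get(a)); // MD5 of empty input
    }
}
```

The ConcurrentHashMap and the atomic counter are what keep the shared state sane; naked `int++` from six threads is exactly the kind of race I mean.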
<br />
However, using multiple threads is prone to race conditions, especially since I've got code that tries to report the progress through the data every 100 items processed or so.<br />
<br />
Finally, Mp3agic, the MP3 parsing library I'm using, is a) not 100% bug free and b) further customized on my local machine from the distribution on Github. My initial work with Mp3agic was to submit some code to fix the handling of UTF-16 text data in the ID3 tags. Some IO refactoring had been done, and there was an issue with some code that was supposed to be counting characters and instead was counting bytes. This isn't normally an issue since mostly you just encounter UTF-8 with no multibyte characters. I mean, seriously, who <span style="font-size: xx-small;">*cough* Amazon *cough* </span>would put UTF-16 into the comment field of an ID3 tag if they didn't have to. So I fixed that. However, the Mp3agic code is written to work against File objects, and internally does a lot of seeking with a RandomAccessFile object when it parses the file. This strikes me as a waste because I've already turned the file into a byte[] so that I can hand it to the hashing function for the whole-file hash, so I rewrote the MP3 parsing code to work directly against a byte[]. But that's a non-trivial change, and I can't run the unit tests on the library because I've broken some bits of the interface that I don't feel like fixing and that aren't of any use to me anyway.<br />
<br />
<br />
So, between the multiple threads, and the MP3 library instability, I want to make sure that my code is going to persist all the data it has so far every so often, and in addition, if I restart the program, it will load this checkpoint data and only work on stuff it hasn't already done. I actually didn't initially do this, but discovered a small bug in my Mp3agic changes after processing about 500 files on my first run, so I went ahead and implemented it.<br />
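The checkpoint mechanism doesn't need to be fancy; mine serializes with Jackson, but the shape is just "persist what's done, skip it on restart." A sketch using a plain text file so it stays self-contained:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashSet;
import java.util.Set;

public class Checkpoint {
    final Path file;
    final Set<String> done = new HashSet<>();

    Checkpoint(Path file) throws IOException {
        this.file = file;
        if (Files.exists(file)) done.addAll(Files.readAllLines(file)); // resume from a prior run
    }

    boolean alreadyProcessed(String key) { return done.contains(key); }

    // Record a completed item; append-only, so a crash loses at most the in-flight write
    void record(String key) throws IOException {
        done.add(key);
        Files.write(file, (key + System.lineSeparator()).getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        Checkpoint cp = new Checkpoint(Files.createTempFile("checkpoint", ".txt"));
        cp.record("01 - Some Song.mp3");
        System.out.println(cp.alreadyProcessed("01 - Some Song.mp3")); // true
    }
}
```

On restart the main loop just asks `alreadyProcessed` before touching a file — which is exactly what saved me after that 500-file false start.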
<br />
Of course, having done that, it went all the way through the files on the next pass. The final result is about 500 files that aren't parsable, and 5393 unique audio hashes. Of those, only 71 have only one file, and therefore aren't duped. So maybe I'll be able to move to the SSD processing pretty soon. On the other hand I'm not sure I want to arbitrarily delete all but one of each of these files. Ideally I'd like to make sure I keep the file with the most complete and accurate metadata, but that's not really going to be easy to determine heuristically.<br />
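Identifying the dupes from the checkpoint data is just a grouping problem — a sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DupeFinder {
    // Group file paths by their audio hash; any group bigger than one is a dupe set
    static Map<String, List<String>> byHash(Map<String, String> pathToHash) {
        Map<String, List<String>> groups = new HashMap<>();
        for (Map.Entry<String, String> e : pathToHash.entrySet()) {
            groups.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        return groups;
    }

    public static void main(String[] args) {
        Map<String, String> files = new HashMap<>();
        files.put("a/song.mp3", "h1");
        files.put("b/song.mp3", "h1");   // same audio, different tags
        files.put("c/other.mp3", "h2");
        System.out.println(byHash(files).get("h1").size()); // 2 -> candidates for deletion
    }
}
```

Which file in each group survives is the hard part; the grouping itself is trivial.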
<br />
On the other hand, the low hanging fruit is there. Fully duplicated files. There are 10944 file hashes, with 5204 of them having more than one file. So right off the bat I should be able to get rid of about 5k files. Bradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-81238206748274044422012-11-11T18:20:00.001-08:002012-11-26T18:00:26.098-08:00Fixing my music... part 1I've been meaning to spend some time organizing my home media. My ripped movies are already pretty well organized, and I'm happy to rely on XBMC to maintain the metadata for them. However, my music files are a total disaster. There's an order of magnitude more of them, and they have self-contained metadata. There's a ton of duplication of music in various states of file name format, completeness of metadata, upgraded audio and so on. <br />
<br />
I've finally decided to tackle it. I have almost 17k MP3 files currently copied to my desktop machine, basically by taking all the collections I could find in various places and dumping them all together. There's easily going to be 66% duplication in there, because I've copied in, wholesale, both my main storage and the results of previous attempts at organization: a big pass with MusicBrainz Picard that only got half done, as well as the results of having pushed all my music to the Amazon cloud when they said it was going to be free, and then pulling it all back down when they changed their minds. It may go back up, but only after I've curated the hell out of it.<br />
<ol>
<li>Analysis </li>
<li>Eliminate duplication</li>
<li>Eliminate bad files</li>
<li>Bring all files up to a minimum standard regarding tagging</li>
<li>Move to some form of master storage </li>
<li>Create a standard mechanism for preventing re-duplication</li>
</ol>
Today is step 1, and hopefully some work on steps 2 and 3. Right now I'm gathering the data I'll need to dedupe the files. Because of the pre-existing efforts to improve the tagging, I can't rely on duplicates actually being identical files. In order to address this my analysis is going through all 17k files and producing an MD5 sum of the file (for low hanging fruit, duplicate-wise) as well as parsing the file with a modified version of mp3agic so that I can identify the actual MP3 audio frames and produce an MD5 hash of those specifically. I'm also looking at the MP3 ID3v2 comment field, where apparently some songs have an Amazon ID stored (presumably if I've purchased them from Amazon, but possibly if I've simply stored them there and they've been upgraded).<br />
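Hashing only the audio mostly comes down to skipping the leading ID3v2 tag, whose header stores its size as a syncsafe integer. A simplified sketch (real files can also have ID3v1 trailers, footers and appended tags to worry about, which is why I'm leaning on Mp3agic for the real parsing):

```java
import java.security.MessageDigest;

public class AudioHash {
    // Length of a leading ID3v2 tag: 10-byte header + syncsafe 28-bit size, or 0 if no tag
    static int id3v2Length(byte[] mp3) {
        if (mp3.length < 10 || mp3[0] != 'I' || mp3[1] != 'D' || mp3[2] != '3') return 0;
        int size = (mp3[6] & 0x7f) << 21 | (mp3[7] & 0x7f) << 14
                 | (mp3[8] & 0x7f) << 7  | (mp3[9] & 0x7f);
        return 10 + size;
    }

    // MD5 over everything after the tag, so retagged copies of the same rip hash the same
    static String audioMd5(byte[] mp3) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        int start = id3v2Length(mp3);
        md.update(mp3, start, mp3.length - start);
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] bare   = { 1, 2, 3 };
        // Same "audio" behind a fake 5-byte ID3v2.3 tag
        byte[] tagged = { 'I', 'D', '3', 3, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 1, 2, 3 };
        System.out.println(audioMd5(bare).equals(audioMd5(tagged))); // true: tag ignored
    }
}
```

That equality is the whole point of the second hash: two files that diverge only in their tags collapse onto one audio hash.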
<br />
Well, the analysis step just finished and has produced a 5 MB JSON file with the salient details. I'll start working on identifying the files I can junk, and the files I need to curate. <br />
<br />Bradley Austin Davishttp://www.blogger.com/profile/16387001483589908239noreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-56184133819914391332010-05-26T14:33:00.000-07:002012-11-26T17:59:39.947-08:00In which the hero spends his afternoon in a fruitless exercise<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;">My QA dept filed a P1 bug against my recently implemented feature because the DB that's supposed to be updated in response to a JMS message wasn't being updated. Supposedly. I say supposedly because as often as not the bugs they file turn out to be a misunderstanding about what's being tested or what's supposed to happen. </span><br />
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;"><br /></span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;">In this specific case, a JMS message is supposed to contain an <b>offer code</b>. Offer codes are like price schedules, i.e. items with offer code <i>X</i> are $0.99 until 2012, and then they become $0.50 owing to the collapse of civilization, or something like that. </span><span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;">But apparently offer codes can change. If civilization doesn't collapse, they may decide to keep the $0.99 price until 2013. </span>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;"><br /></span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;">We don't store prices, we just store offer codes. This is good for us because generally we don't care if the price schedule for a given product changes, since the offer code stays the same. However, one of the kinds of products is a bundle, which contains a number of other products. A bundle's price is based on the price schedules of all the products it contains. So even though we don't have to do anything to a product if its offer code's price schedule changes, we do have to recalculate the price schedules of all the bundles that contain that product and get new offer codes for them. </span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;"><br /></span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;">That's what this JMS message is about. It signals when an offer code has changed and therefore when we get one we have to find all the bundles that contain products with that offer code. Not the products themselves, just the bundles that contain them, and we flag them for reprocessing. </span></div>
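In code terms the handler is a reverse lookup — purely illustrative here, nothing like the production schema:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class RepriceFlagger {
    // bundleContents: bundle -> the products it contains
    // productOfferCodes: product -> its current offer code
    static Set<String> bundlesToReprice(String changedOfferCode,
                                        Map<String, Set<String>> bundleContents,
                                        Map<String, String> productOfferCodes) {
        Set<String> flagged = new TreeSet<>();
        for (Map.Entry<String, Set<String>> bundle : bundleContents.entrySet()) {
            for (String product : bundle.getValue()) {
                if (changedOfferCode.equals(productOfferCodes.get(product))) {
                    flagged.add(bundle.getKey()); // flag the bundle, not the product
                    break;
                }
            }
        }
        return flagged;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> bundles = Map.of(
            "bundle1", Set.of("prodA", "prodB"),
            "bundle2", Set.of("prodC"));
        Map<String, String> offers = Map.of("prodA", "X", "prodB", "Y", "prodC", "Y");
        System.out.println(bundlesToReprice("X", bundles, offers)); // [bundle1]
        // QA's data: a product with offer code X that sits in no bundle -> nothing flagged
        System.out.println(bundlesToReprice("X", Map.of(), offers)); // []
    }
}
```

The second call is the bug report in miniature: no matching bundles means no rows change, which is correct behavior, not a defect.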
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;"><br /></span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;">The bug QA filed was that they were sending a test message with a given offer code and seeing no changes in the table that lists what bundles are waiting to be repriced. I spent 2 hours on testing the code end to end to ensure that if I sent a JMS message into the queue that it was picked up and processed by the application. This was actually stupid on my part</span><span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;">*</span><span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;"> because after I was done with that and had verified everything worked fine, I looked at the test database to see what the data they were working with was. There was exactly one product with the offer code in question, and it was included in no bundles at all. The application was making no change to the database because there was nothing matching the criteria of what it was supposed to change. </span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 13px;"><br /></span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 100%;"><span class="Apple-style-span" style="font-size: 13px;">* This wasn't really stupid on my part. I've had enough situations where there's both a flaw in the program <i>and</i> a flaw in QA's process that by pointing out the latter and closing the bug I just end up having to come back to it the next day.</span></span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 100%;"><span class="Apple-style-span" style="font-size: 13px;"><br /></span></span></div>
<div>
<span class="Apple-style-span" style="color: #333333; font-family: 'lucida grande', tahoma, verdana, arial, sans-serif; font-size: 100%;"><span class="Apple-style-span" style="font-size: 13px;"><br /></span></span></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-37384829837396221662009-12-28T14:06:00.000-08:002010-01-08T23:41:36.270-08:00Why Avatar is a load of crapFor a supposed programming blog a disturbing number of my posts seem to be non-programming related. <br /><br />Avatar seems to be getting a lot of hype as being revolutionary or groundbreaking or changing the shape of filmmaking. Hearing this kind of hype coming from James Cameron can be pretty exciting. This is the guy who really did some pioneering work on integrating CG with live actors (The Abyss), creating full characters out of CG (Terminator 2), and creating huge digital sets and crowds (Titanic). However, on reflection, I believe that Avatar has finally gotten the better of him and ends up being Cameron's 'Phantom Menace'. The phenomenon of a director rising to the level of his incompetence isn't exactly unusual. The Wachowskis did it with their second and third Matrix movies, Bryan Singer did it with Superman Returns. I feel certain Peter Jackson's next major release will be an utter clusterfuck. But I digress.<br /><br />I defy anyone to tell me what's groundbreaking in Avatar. 3D has been done extensively before, and as it's arguably easier to accomplish with an all digital scene than with a practical one, I submit that <span style="font-style: italic;">Spacehunter: Adventures in the Forbidden Zone</span> was a greater accomplishment on that front. While the environment is lush and believable, again, this is something that's been done before. What we're seeing is at best a refinement of previously demonstrated skill sets. At first I was surprised at how recognizable the actors were as Na'vi, but even then you can look at the second Pirates of the Caribbean movie and you see the same thing in the Davy Jones effects. 
<br /><br />A friend of mine commented that there were no Ewoks or Gungans in the movie, but once you've heard the term 'Thundersmurfs' that doesn't really hold any water with me.<br /><br />What is with Giovanni Ribisi playing these whiny, bitchy background characters? Even when he actually played the lead in something (<span style="font-style: italic;">Boiler Room</span>) it felt like he was a character in a Vin Diesel movie that had surprisingly little Vin Diesel screen time. Who the hell casts Vin Diesel as a supporting character anyway? When Tom Hanks got a good look at Vin in Saving Private Ryan, I'm pretty sure the first thing he did was walk over to Spielberg and whisper 'Kill him first' in his ear. But again, I digress. <br /><br />Also, today I learned about Linear Feedback Shift Registers, which is programming related, so there.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-7752108409776070578.post-52027295843643147422009-12-14T13:44:00.000-08:002012-11-26T17:59:21.245-08:00Hibernate & DBUnitI finally got the combination of Hibernate, DBUnit, Derby, Spring and JUnit working to my satisfaction. <br />
<br />
We developed a test Jar which includes a ZIP file containing a full Derby database with all the schema and 'lookup' data contained in our production Oracle instances. This was done in order to remove the dependence on the test Oracle instances for unit testing of our ORM mappings. However, the problem of unit test performance remained. Individual unit test classes can have 'before' and 'after' functions that are run around each and every test inside that class. They can also have 'beforeClass' and 'afterClass' functions that are executed for the entire group of tests contained within the class. However, neither of these is really suitable for the problem of decompressing a zip file and initializing a Derby embedded DB driver. It's a very expensive operation that you really only want to have happen once for the entire set of test classes.<br />
<br />
Up until recently, we got around this by having all the test classes derive from a common base class, which had a beforeClass static method that would do all the initialization. However, this actually had to be done based on the value of a static boolean variable, since that beforeClass function would be called many times (once for each derived concrete test class). In addition to the ugliness of this approach, it didn't lend itself to a mechanism for cleanup. While you can have an afterClass method, this method would be called many times for the same reason as the beforeClass method, and if you put cleanup code there you might be cleaning up the state which the next test class needs. There's no way to know if a particular call to the afterClass method is the final call. <br />
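The guard itself was nothing fancy; it was roughly this shape (a minimal sketch with made-up names, shown without the JUnit annotations the real code carried):

```java
// Hypothetical sketch of the static-guard workaround. Every concrete test
// class inherited this, and JUnit (via @BeforeClass in the real code) called
// setUpOnce() once per class -- but the expensive work only ran the first time.
public class BaseDbTest {
    private static boolean initialized = false;
    private static int initCount = 0; // counts real initializations, for illustration

    public static synchronized void setUpOnce() {
        if (!initialized) {
            expensiveInit();
            initialized = true;
        }
    }

    // Stand-in for unzipping the Derby database and loading the embedded driver.
    private static void expensiveInit() {
        initCount++;
    }

    public static int getInitCount() {
        return initCount;
    }

    public static void main(String[] args) {
        // Simulate JUnit invoking the guard once for each of three test classes.
        setUpOnce();
        setUpOnce();
        setUpOnce();
        System.out.println("initializations: " + getInitCount()); // prints 1
    }
}
```

Note that nothing in this shape ever learns when the *last* test class has finished, which is exactly why cleanup had no natural home.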
<br />
All this got fixed when I learned about the use of unit test suites. A suite is pretty much what it sounds like, a collection of unit test classes, with its own encompassing beforeClass and afterClass methods which will each be run once for the suite, regardless of how many test classes it contains. I'd seen suite usage from time to time but never really paid it much attention, and when I looked, I found documentation on using suites in JUnit surprisingly sparse. Much of what is available is related to the JUnit 3.8.x style of unit testing, not the new annotation based testing. Further, even the 'surefire' testing plugin for Maven doesn't work well with suites unless you force a particular plugin version in your POM. <br />
<br />
Regardless, once you overcome those obstacles, the use of a Suite for pan-test class setup and cleanup ends up being valuable. <br />
<br />
For those interested, the suite is annotated like this:<br />
<blockquote>
<span style="font-family: courier new;">@RunWith(org.junit.runners.Suite.class)</span><br />
<span style="font-family: courier new;">@SuiteClasses({ ContentListRetrieve.class, GalleryRetrieve.class, ImageRetrieve.class, ImageSetRetrieve.class, StoryRetrieve.class} )</span><br />
<span style="font-family: courier new;">public class SwiftCoreSuite extends SwiftBaseSuite {</span></blockquote>
Because we actually have multiple related ORM jars, we still break up the suites into base and derived classes. The SwiftBaseSuite class is responsible for initializing the spring context, decompressing the Derby database and creating a JDBC driver pointing to it. The SwiftCoreSuite class is responsible for injecting the test data for the unit tests in this particular jar and providing the suite annotations.<br />
<br />
The relevant Maven POM sections look like this:<br />
<span style="font-family: courier new;">&lt;build&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&lt;plugins&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&lt;plugin&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;artifactId&gt;maven-surefire-plugin&lt;/artifactId&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;version&gt;2.4.3&lt;/version&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;configuration&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;includes&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;include&gt;**/*Suite.java&lt;/include&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;/includes&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;/configuration&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&nbsp;&nbsp;&lt;/plugin&gt;</span><br />
<span style="font-family: courier new;">&nbsp;&nbsp;&lt;/plugins&gt;</span><br />
<span style="font-family: courier new;">&lt;/build&gt;</span><br />
<br />
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-34573208419991285392009-10-21T15:31:00.000-07:002012-11-26T17:58:26.676-08:00Super Debug Fun TimeToday I spent virtually my entire day on a single bug. The bug itself was reported as a timeout error by the client accessing a web service, but I suspect that's because of a client programming error. In fact when I tried to reproduce the error I got a response from the server detailing the error, kind of. It told me what exceptions had been thrown (a runtime exception caused by a transform exception caused by a null pointer exception) but not the actual stack traces. Attempting to reproduce this by running the web service on my own machine didn't produce the same result, though: it worked.<br />
<br />
This was somewhat frustrating because in my experience, bugs reported against this particular web service are usually client errors and not actual bugs. That it was working on my machine but not on the test machine indicated otherwise. That still left the second most likely cause, one which wouldn't involve a new release: data bugs. The machine where the failure was observed was pointing to a different database than I was in my local testing, so I switched the DB target on my laptop and tried again. Still no failure. Now we're getting into the 'crap, I have to do actual work' phase of debugging.<br />
<br />
This led to about a half hour of fighting with remote debugging. Not because it's hard or anything, but because the machine where the error was occurring only seems to have one unfirewalled port and that's the one the server is running on, leaving me no way to connect my debugger to the running instance of tomcat and actually test at the same time. Finally I realized I didn't have to open a port: since I had SSH access to the machine, I could just use port forwarding. Someday I'll write another blog entry entitled "Port Forwarding, or why IT has no chance of actually isolating me from the production environment, ever".<br />
<br />
Remote debugging allowed me to isolate the null pointer access fairly quickly, but since it turned out to be deep in some JDK XML formatting code, this brought me up short. Surely this isn't a JDK bug. I mean I've encountered JVM/JDK bugs before, but it's extremely rare, and when you do find them, once you've isolated them to a given class, it usually only takes about 10 seconds of googling to find the actual bug report. Since I didn't find any such report this would imply an <span style="font-style: italic;">unlogged</span> JDK bug, which is so unlikely as to be pretty much impossible. Especially since the code worked on my local machine and therefore if it was a JDK bug it would almost certainly have to be one fixed between build 12 and 16 of JDK 6, that being the difference between the two machines.<br />
<br />
Going back to debugging, I decided to walk the code through the error and watch the values of the variables to try to find out when the null value that was being accessed was actually created. Stepping through the code line by line is fairly tedious, especially when you come out the other end and the error doesn't occur. Now that's weird. I tried this again, first running the code without stepping through it, and then running it again line by line. Same result: if I just ran the code it threw an exception, but if I stepped through the code line by line, it disappeared. This is known as a <span style="font-style: italic;">Heisen</span>bug after Werner Heisenberg, a pioneer in quantum physics. It refers to a bug that only appears when you aren't looking too closely at it.<br />
<br />
<br />
Stepping through the code a few more times I notice that sometimes the bug does occur. Eventually I'm able to divine that the bug only shows up if I don't actually examine the objects that are being manipulated in the debugger view. Realizing this, it doesn't take too long to isolate exactly the object in question and the lines where the critical code is. In this case the critical error (the insertion of null values where there should be none) is happening in a different location from the actual exception, which is thrown when something actually tries to manipulate said null values.<br />
<br />
The sequence of events is this:<br />
<ul>
<li>Document a gets created with a single parent node</li>
<li>Document b gets created with a bunch of child nodes under a single parent node</li>
<li>Document b's parent node is adopted by document a</li>
<li>Document a is rendered, throwing an exception</li>
</ul>
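The four steps above can be sketched with nothing but the JDK's DOM API (a minimal, hypothetical reconstruction; the real documents were parsed from XML, which is what produced the Xerces 'Deferred' node implementations, whereas documents built in memory like this are not deferred):

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class AdoptDemo {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();

        // Document a gets created with a single parent node.
        Document a = builder.newDocument();
        Element aRoot = a.createElement("a-root");
        a.appendChild(aRoot);

        // Document b gets created with child nodes under a single parent node.
        Document b = builder.newDocument();
        Element bRoot = b.createElement("b-root");
        b.appendChild(bRoot);
        for (int i = 0; i < 3; i++) {
            bRoot.appendChild(b.createElement("child"));
        }

        // Document b's parent node is adopted by document a.
        Node adopted = a.adoptNode(bRoot);
        aRoot.appendChild(adopted);

        // Rendering document a is the step that blew up for us; with a
        // correct parser all the adopted children are present and accounted for.
        System.out.println(a.getElementsByTagName("child").getLength());
    }
}
```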
If you look at document b's elements before the adoption, they all look fine. If you inspect document b in the debugger before it's adopted by document a, everything runs fine. If you don't inspect document b and then try to render document a after it adopts the nodes from b, you get the error. I notice that the class names for the nodes I'm manipulating all begin with 'Deferred', as in DeferredTextNSImpl for a Text node. Further I notice that when I examine document b in the debugger its internal state changes, as if <span style="font-style: italic;">deferred</span> actions are being taken in order to do the rendering, and such actions are causing the rest of the code to work fine. Light #1 comes on. Not inspecting document b in the debugger means those deferred actions don't take place prior to the import and thus the import happens incorrectly. I still have trouble believing this because that would definitely be a JDK bug.<br />
<br />
All of the functions involving document manipulation are part of the core JDK, and on my local machine I can debug right into them just fine, but via the remote debugger I can't. Chalking this up to the JDK difference, I install the server's older JDK on my desktop. However, even after this I'm unable to duplicate the bug on my local machine. I decide to go back to remote debugging, being careful what I inspect in the debugger window, and decide to trace into the JDK calls, hoping that having the exact same version will allow me to step into the functions. No luck, though. Now I start fighting with the IDE trying to find out why it won't show me the source code for the document objects or let me trace into them. I discover, oddly, that it refuses to show me the class that is being used. This is actually unprecedented. All the classes I use in the target environment should be on my classpath, and even if Eclipse doesn't have the source code for some of them it should always show me the exact class being used, and what jar it belongs to. Now I'm getting suspicious.<br />
<br />
I take a look at the environment where the error is occurring and look at the available libraries. They all look fine... except I suddenly notice one file 'xercesImpl-2.6.2.jar' on the target machine. Xerces is the Apache Project's XML library and in fact it's the one integrated into JDK 5 and up. There shouldn't be any 'implementation' jar in the classpath at all, certainly not some file from 2005. Looking at my own debugger I notice that on my machine it isn't listed. Clearly we've found the culprit. Now to figure out the cause.<br />
<br />
We use Maven to manage dependencies, and Maven (via mvn dependency:tree) will tell you exactly how everything you're using got included, i.e. either directly or transitively, and from what parent dependency. Running this on my machine I see no listing for the xerces jar. I go to the machine where the project was actually built and do the same thing and there it is. We depend on XFire, which depends on Jaxen, which depends on XercesImpl, but on my machine Jaxen doesn't trigger any dependencies at all. Realizing exactly where the problem lies, I do some testing and discover we're not using the Jaxen functionality at all, so I exclude it manually from the XFire dependency and do a new build, closing the bug.
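For reference, the exclusion looks roughly like this in the POM (the coordinates and version here are illustrative, not copied from our actual build):

```xml
<dependency>
  <groupId>org.codehaus.xfire</groupId>
  <artifactId>xfire-core</artifactId>
  <version>1.2.6</version>
  <exclusions>
    <exclusion>
      <groupId>jaxen</groupId>
      <artifactId>jaxen</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```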
<br />
It looks very much like my issue is related to <a href="http://jira.codehaus.org/browse/MNG-3007">this </a>but I still can't determine why exactly my machine doesn't pick up the additional dependencies. They're listed in the POM on my local machine and both systems are running the same version of Maven. The sad thing is that after several years of using Maven, I've now hit two weird dependency-resolution issues in the space of two weeks, and almost none previously.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-81505543080494249762009-10-14T19:43:00.000-07:002010-01-08T23:41:36.316-08:00Why FlashForward is stupidThe new science-fiction show FlashForward is built around the premise that everyone in the world falls unconscious for about 2 minutes and experiences a flash forward of their own life 9 months in the future. The description of most of the flash forwards involves relatively mundane events, but it's clearly established that the events depicted are not some sort of alternate future, but are a future in which the flash forward occurred. So my question is, if all the people experienced the flash forward in their own past, why didn't anyone (or anyone we've seen so far) attempt to send useful information to themselves in the past? If I knew that my past self was going to experience an excerpt of my future life, I'd probably make sure that at the appointed time I was staring at a page full of useful information.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-73805795507967057512009-10-07T14:11:00.000-07:002012-11-26T18:03:26.485-08:00Server tinkeringSo I finally got the last process off my Mac Mini, allowing me to retire or repurpose it. I suppose I should explain. Over the years, going back at least a decade, I've always had a server machine running at my residence, for the purposes of file serving and mail storage. 
For a significant chunk of that time it was actually a publicly visible SMTP server for accepting incoming mail. It's usually been made out of whatever parts were no longer suitable for my primary computer (or that of my significant other). <br />
<br />
A few years back, when the Mac Mini came out, I decided to try it out as my wife's primary machine, Macs being all about ease of use. That lasted until she started playing World of Warcraft, which was too much for the paltry first-gen machine. Since my server machine was pretty old and out of date I decided to use the Mini as its replacement. At the time, pretty much everything I needed out of a server was covered by three things: Postfix for mail delivery, Dovecot for local mail storage and Samba for file serving. All three were available on the Mac, so I spent the time needed to get them installed and working. Unfortunately, the auto-update functionality of the Mini doesn't extend to such arcane packages installed by the user, so there they have sat, largely untouched, for the past 4 years.<br />
<br />
A few months ago I was struggling with my PS3's media-serving abilities, which are less than stunning. The PS3 is <span style="font-style: italic;">very</span> finicky about what codecs it will and won't stream, and a lot of my media was apparently on the 'fuck you' list. After struggling valiantly trying to figure out how TwonkyMedia's transcoding features were supposed to work, I finally gave up and started looking for a new solution, which I found in the aptly named <a href="http://ps3mediaserver.blogspot.com/">PS3 Media Server</a>. However, the Mini just didn't have the power to do the transcoding I wanted, so I decided to repurpose Kat's old laptop (really not that old, just without good enough video for the latest Warcraft expansion) into a new server running Ubuntu and all the media serving software. Partially because Samba management is easier on Linux than on the Mac, and partially because serving media off a local disk is better than sending it over the wire twice, I decided to move all the file serving over to the new machine as well. <br />
<br />
At this point, only the email functionality remained on the Mini. Well, yesterday I finally moved that over as well. It turns out I was fairly behind the times on my Dovecot and Postfix versions, so I had to make some config changes on each, but on the plus side my Dovecot server now requires a secure connection for email, and also supports full text search (though only on a per-folder basis).Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-7752108409776070578.post-84491576201314738862009-09-09T16:07:00.000-07:002010-01-08T23:42:55.749-08:00AI and DesireI recently saw an article about the idea that motivation is key to an effective AI. Not to be snarky, but this occurred to me when I was in my teens, a couple decades ago. You can have all the heuristic abilities and fuzzy logic you can build, but until you can figure out how to program desire, your AI will probably be the world's greatest navel-starer. <br /><br />This line of thought has always made me wonder about the nature of desire in humans. Disregarding intellectual and emotional goals for the moment, consider pain and pleasure. Purely chemical in nature, how do these things work the way they do? Chemically and biologically we can say what pain is all the way from receptors in the limbs to nerve signals up the spinal cord, but in the end why should the impulses they trigger in the brain be interpreted as unpleasant while others might be pleasant? The pragmatic answer is of course 'Because of millions of years of evolutionary pressure'. But, not having millions of years to dick about with neural nets, the question is then, how do we replicate the end result in a machine intelligence?