This weekend I wrote my first Android application: Love or Hate
The idea behind the game is very simple. When you start it, the application goes online and looks at the most recent tweets containing one of the following phrases:
- “I love it when”
- “I hate it when”
The application now displays some user information (username/profile picture) and shows parts of the tweet. The words “love” and “hate” are removed and the user has to guess which of the two words was in the original tweet.
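The masking itself is simple string work; a minimal sketch of the idea (the class and method names are mine, not the app's actual code):

```java
public class TweetMasker {
    // Blank out "love"/"hate" (case-insensitive) so the player has to guess
    public static String mask(String tweet) {
        return tweet.replaceAll("(?i)\\b(love|hate)\\b", "____");
    }
}
```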
The backend is just Twitter itself, and connecting to Twitter was really easy. I used JTwitter, a small API that helps with searching Twitter. The code to find the tweets was only about 20 lines.
So that leaves me with the Android/phone part. The first problem is the lack of an Android phone: I don't have one… But luckily, when you install the Android SDK and Eclipse with the Android plugin, everything is ready, including an emulator. The first important thing I learned was to leave the emulator running; you don't have to close it to deploy a new version, which greatly improves development speed. But as I learned when teaming up with two colleagues, Ron and Jan-Kees, having an actual Android device is easier still.
The next discovery: Android isn't at all like Swing. It mostly revolves around XML layouts, so creating a GUI feels much more like building an XHTML web application than working with a Java GUI. Creating the GUI was much harder than I thought it would be, and this application still has a very bad one. It must be a nightmare creating applications that render correctly fullscreen on all the different devices! This is probably an area where Apple has a huge advantage, offering only two screen sizes (iPhone/iPad). After fiddling for a long time I got the application looking like something I could get away with. Not great, but everything was on it.
One good thing about Android development is the use of externalized strings. If you follow the advice and put all text in the strings.xml file, it becomes very easy to release the application in multiple languages. As a quick test I've released my application in English (default), Dutch and French.
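For example (the string values here are illustrative): a res/values/strings.xml holds the default English texts, and translated copies go in res/values-nl/ and res/values-fr/:

```xml
<!-- res/values/strings.xml (default: English) -->
<resources>
    <string name="app_name">Love or Hate</string>
    <string name="guess_prompt">Was it love or hate?</string>
</resources>
```

Android then picks the right file automatically based on the device locale.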
Releasing to the Market
Of course I could have stopped here, but I wanted to release something to the Android Market. Even though the application looks ugly, it is playable and my colleagues even enjoyed playing it for a short while. The first step is creating a signed, zipaligned APK file. This was very easy using the Eclipse plugin: just click "Export as signed Android application" and you are done. It even performs the zipalign step, which optimizes the APK file for release.
To get your application into the Market you need an Android Market account. Opening one costs 25 euros (one time, not per application), and although I don't think I'll ever see that return on investment, I decided to do it anyway. The only thing left was to upload the application, including a lot of artwork (screenshots/icons/thumbnails/misc descriptions).
Adding ads from AdMob
Now that I've invested 25 euros, I decided I needed some way to earn a bit of it back. This actually proved to be the easiest part of the whole project: with just 5 or 6 lines of code you can add AdMob ads to your application. You can read about it here.
After one day I'm pretty surprised: 350 people have already downloaded the application and I've earned more than 50 cents from AdMob (2% ROI).
I’ve really enjoyed doing something new, and I might do more Android hobby-projects in the future. If you have any ideas, please tell me :-)
This weekend I was invited to a friend's wedding. She asked if I could film everything that happened. This was basically the first time I'd filmed anything, and I decided to edit it all together with some music. Here is the result:
It was a very nice day and I had a blast filming and editing everything. It is very rewarding! I really have to do this more often, but coming up with ideas to film is hard…
Yay! There is a new Virtual Source Programming Contest: “God’s Dice”
After a bit of a messy start to the contest, within hours three people had a 100% score (well, 99.99%, due to some rounding errors). This prompted James, the organizer, to change the rules a bit. Now the contest is running smoothly and the loophole is gone.
The goal of the contest is this:
- Take a cube, with all sides length 1
- Place N (9 to 88) points anywhere on the surface of the cube
- Now calculate the surface area of all triangles between your points
- Your score is: totalSurfaceArea * (smallestTriangleSurfaceArea ^ 2)
So basically you need to make the triangles as large as possible, BUT all triangles need to be large: a single small triangle drags the score down quadratically.
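Scoring a candidate solution boils down to the area of a 3D triangle via the cross product; a sketch of a scorer's core (my own rendering, not the contest's reference code):

```java
public class TriangleScorer {
    // Area of the triangle (a, b, c); points are given as {x, y, z}
    public static double area(double[] a, double[] b, double[] c) {
        double ux = b[0] - a[0], uy = b[1] - a[1], uz = b[2] - a[2];
        double vx = c[0] - a[0], vy = c[1] - a[1], vz = c[2] - a[2];
        // Cross product u x v; half its length is the triangle area
        double cx = uy * vz - uz * vy;
        double cy = uz * vx - ux * vz;
        double cz = ux * vy - uy * vx;
        return 0.5 * Math.sqrt(cx * cx + cy * cy + cz * cz);
    }

    // totalSurfaceArea * (smallestTriangleSurfaceArea ^ 2)
    public static double score(double totalArea, double smallestArea) {
        return totalArea * smallestArea * smallestArea;
    }
}
```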
Up to now I've only written a scorer and a basic random searcher, just to get myself on the leaderboard. Now it is time to do something smarter: maybe just a better search algorithm, but I'll probably need something incorporating geometric knowledge. For example, moving one point only changes the triangles that point is involved in, so we could re-test just those triangles (instead of re-scoring the whole cube).
And I've got some other ideas, which I obviously won't share, for the contest's sake!
When browsing through the posts on DZone I noticed a couple of new WordPress plugins. When I tried them (on my live website, of course) everything failed and produced errors. And of course I didn't make a proper backup before I started… sigh.
Anyway, I was able to log in over FTP, find the broken file and fix it (PHP, blegh)! But then I started experimenting with other parts of the website and tried some different WordPress themes. I'd had the previous theme for a couple of years now. Time for something new! What do you guys think of this new one?
The photo in the header, with the zebras, was taken during my holiday in South Africa last year.
Edit: The zebra picture made it look like a website of the WNF, so I changed it!
The complete (online) collection can be found here: https://picasaweb.google.com/roy.van.rijn/ZuidAfrika
For years and years I’ve also been using this avatar:
A lot of people have asked me why I’m using that avatar instead of a normal photo. I’ve been using this image since my teen years, when I had acne all over the place. I’ve just kept using it, and now people actually recognize it.
I drew that cartoon image myself; I have no idea what it resembles, just some unidentified creature. It was actually drawn over a photo of our family dog Dana, which my parents will probably have to put down soon. She is very old and sick now; it is just a matter of days…
Crowdflow wants your iPhone data, to generate pretty heatmaps!
Why am I telling you this? Because they are using my backup-data-extractor code! See their most recent blog entry.
The images they produce look very good, much better than the KML file in Google Earth!
Be sure to help them soon, because Apple is working on deleting most of the location data in the next iPhone update. After that, probably in the update following it, they'll encrypt the location data so it is much harder (or impossible) to read.
Wow, great news!
After my previous blogpost a lot of people mailed me about the usability of my 'iPhone location data to Google Earth' tool. It was a command-line tool, and you needed Java 6 installed. The main goal was to show other developers how it is done in Java.
The result of their effort: http://markolson.github.com/js-sqlite-map-thing/
To use it, drag the backup files into the upper part of the website. It will parse the files, locate the data and display the track on Google Maps!
The source code can be found here: https://github.com/markolson/js-sqlite-map-thing
Today I noticed the following post on Slashdot: Apple Logging Locations of All iPhone Users
And the article they are referring to can be found here
At first I was amazed: how can Apple do this? But on second thought, they aren't sending it to anybody yet; it is just something stored on the phone (and in the phone's backup). My next reaction was: I want to see my own map! But to my disappointment the tool only works on Apple Macs, not Windows/Linux.
(Re-)building it in Java
Because their code is open source and well explained on the website, I decided to make my own version in Java (cross-platform!). Now, just a couple of hours after reading the article, I've got it working and it can be used as a base for extension. So I'd like to share the code!
Step 1: Getting the correct file
This step was technically the hardest part. The backup Apple makes doesn't just contain the files as they appear on your iPhone; instead they all have names like "b4d1c79c2f5fc1da044c18d1066b80b39b575590". The backup does, however, store the mapping in two files: Manifest.mbdb and Manifest.mbdx.
The MBDB file contains the information about the original files, and the MBDX file contains pointers into the MBDB file together with the hex filenames used in the backup. So we need to parse both files to connect each original filename to its hex filename.
Luckily somebody had already done this in Python and posted it on Stack Overflow. I took their idea and translated it into Java code.
Once this was fully translated it could resolve all the filenames. This meant I could map the target file "Library/Caches/locationd/consolidated.db" to the hex filename "4096c9ec676f2847dc283405900e284a7c815836".
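As a sanity check on the mapping: as far as I can tell, the hex filenames in these backups are the SHA-1 of the domain plus the path, so a resolved name can be double-checked like this (the exact domain string is an assumption on my part; parsing the Manifest files as above remains the robust route):

```java
import java.security.MessageDigest;

public class BackupName {
    // Backup filename = SHA-1 hex of "<domain>-<relative path>"
    public static String of(String domain, String path) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest((domain + "-" + path).getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```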
Step 2: Reading the SQLite file
The contents of "4096c9ec676f2847dc283405900e284a7c815836" is just an SQLite file. For this I decided to use SQLJet, a framework that can read (and write) SQLite files. With just a couple of lines of code I had all the data I need: timestamp, latitude, longitude (and more: speed, course, confidence, altitude, etc.).
Step 3: What to do with the geo data?
The implementation created by Pete Warden has a nice map and timeline, but at this point I wanted results as fast as possible. That is why I chose to simply output a KML file. KML is a file format used by Google Earth and other applications.
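A KML file is plain XML; one placemark per recorded location is enough for Google Earth (the coordinates below are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <TimeStamp><when>2011-04-20T12:00:00Z</when></TimeStamp>
      <!-- KML wants longitude,latitude[,altitude] -->
      <Point><coordinates>4.895,52.370,0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
```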
The results are far from nice (just plain ugly) but fun: click here for a screenshot
Running the code
The code above is mainly intended for developers; the resulting JAR can't be executed directly. But I'm aware people will just want the KML file without going through the code. That is why I've uploaded an executable JAR. You can get it here: runnableiPhoneJTrack.zip (813 KB)
To run the program, extract the three JAR-files and type:
This outputs a file named iPhoneData.kml, which can be loaded into, for example, Google Earth.
Playing with the code
I’ve pushed my code onto GitHub: https://github.com/royvanrijn/iPhoneJTrack
Feel free to add features and send me pull requests!
Today our project suddenly had 10 failing unit tests on our integration server (Hudson). Opening Hudson and looking at the first failed build, I was in for a surprise: the only code changed in that commit was mine!
Quickly I looked around and went into overdrive mode:
I need to fix this bug before somebody sees it!
I'm always yelling at people who break the build, and now it has happened to me…!
Looking at the SVN comment, it only said: "cleaned up some code" (or something like that)
That isn't very helpful of me…
The second scare
Then I noticed I had changed only one file, so the problem should be contained and easy to fix. But then I had a moment of relief and scare at the same time:
- Phew, it isn’t my fault… removing empty lines should not break builds…
- But… why the heck does removing an empty line break the build!?
After looking through the code and the failing tests, a colleague and I found the culprit: stateful statics.
In our application we had introduced a static class called GlobalContext. In this class we keep session information, such as the MapState (we have a map, which has dimensions, view extents, etc.), and throughout the code we read and mutate this information. To unit-test the code we added setters to inject mock MapStates into the GlobalContext.
Running all the unit tests in Eclipse runs them sequentially, and everything is fine. But this goes horribly wrong in the Maven/Hudson build: Surefire takes the test classes and runs them in parallel! So while one test is reading the state of our static class, another test is setting different information in that same static class. This causes NPEs and other weird values. Until now we had been lucky and the tests all ran correctly, but after my commit the execution order apparently changed.
The way to fix this is to remove the static-ness of GlobalContext. Since we are also using Spring, we can simply put an instance of MapState in the Spring context and inject it into the classes that use it. That way, in a test, you can inject a mock instance into the objects you are testing. This solves everything…
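The refactoring in a nutshell, with a hypothetical trimmed-down MapState (our real class holds much more): instead of reaching into a static, each object gets its own reference.

```java
// Hypothetical, trimmed-down MapState
class MapState {
    private final int width;
    MapState(int width) { this.width = width; }
    int getWidth() { return width; }
}

// Instead of calling GlobalContext.getMapState(), the dependency is injected:
// by Spring in production, by hand in a test
class MapRenderer {
    private final MapState mapState;
    MapRenderer(MapState mapState) { this.mapState = mapState; }
    int viewWidth() { return mapState.getWidth(); }
}
```

Two tests running in parallel can now each construct their own MapRenderer with their own (mock) MapState, and they can't trample each other.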
Even better: get rid of GlobalContext altogether. It contained a couple of instances that could easily be injected into the other objects. This makes it much clearer which objects depend on which other objects, and it makes the code a lot easier to test.
This got me thinking: in which cases should you use static? Obviously "public static final" fields are useful, and maybe static functions in utility classes, which have no state and only transform their input. But are there cases where you really need static state? I've been thinking about this for a day now, but I can't come up with a good use case. Do you know one? Mail me, or comment below!
So for now, my new rule: Don’t use static unless you have a very good reason, and never ever have static (mutable) state!
P.S. And people, stop using SimpleDateFormat as a field: it is not thread-safe and causes weird exceptions and weird dates! I was lucky to catch one in our code today before strange errors occurred.
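A quick sketch of the usual fix, if you really want to reuse formatters: keep one instance per thread instead of one shared field.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class SafeDateFormat {
    // One SimpleDateFormat per thread; never shared, so never corrupted
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }

    public static Date parse(String text) throws ParseException {
        return FORMAT.get().parse(text);
    }
}
```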
Today somebody posted a comment:
‘Could you write a post giving more details on why you think
checked exceptions are “the embodiment of evil”?’
So here it is.
As you can see I haven’t written a post in a long time (sorry). This post is also not going to be very comprehensive. Of course I could explain everything, but I’d rather link to another site which sums it up very nicely:
Basically, I don’t want to be forced to handle exceptions. Also, having to catch checked exceptions makes code unreadable, complex and harder to change/refactor.
Of course I'm also aware that with runtime exceptions you are likely to forget to catch some, which is also very bad. That is why our applications catch and log all runtime exceptions and mail them to the developers. Exceptions are always something very bad and should never be part of the normal flow; exceptions should be… exceptional. Once they happen, all hell should break loose and they should be fixed.
Follow up on the previous blogpost.
Yesterday I wrote an algorithm in Java to generate de Bruijn sequences. But I had a breakthrough after reading:
K. Cattell, F. Ruskey, J. Sawada, M. Serra, C.R. Miers, Fast algorithms to generate necklaces, unlabeled necklaces, and irreducible polynomials over GF(2)
It contains an algorithm that generates Lyndon words in CAT, which stands for Constant Amortized Time. This means the algorithm isn't truly constant time per generated word, but it is on average. A good comparison is ArrayList in Java: an insertion is said to be O(1), and this is true… until the backing array needs to be resized. During that resize it takes more time, so insertion isn't really constant time, but constant amortized time.
This is the program I ended up with. It is very small, one of my most beautiful pieces of code, and it even supports non-binary alphabets (use the k parameter).
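In essence it is the recursive FKM construction from the paper; a minimal version (my sketch of it) looks like this:

```java
public class DeBruijn {
    // FKM algorithm: concatenate the Lyndon words whose length divides n,
    // in lexicographic order, over an alphabet of size k
    public static String generateDeBruijnSequence(int k, int n) {
        StringBuilder sequence = new StringBuilder();
        generate(1, 1, k, n, new int[n + 1], sequence);
        return sequence.toString();
    }

    private static void generate(int t, int p, int k, int n, int[] a, StringBuilder sequence) {
        if (t > n) {
            // a[1..p] is a Lyndon word if its length p divides n
            if (n % p == 0) {
                for (int i = 1; i <= p; i++) sequence.append(a[i]);
            }
        } else {
            a[t] = a[t - p];
            generate(t + 1, p, k, n, a, sequence);
            for (int j = a[t - p] + 1; j < k; j++) {
                a[t] = j;
                generate(t + 1, t, k, n, a, sequence);
            }
        }
    }
}
```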
Update June 7th 2012:
I just stumbled across this interesting read: http://www.learner.org/courses/mathilluminated/units/2/textbook/07.php
The keypad example from that link can be easily calculated with the code above:
generateDeBruijnSequence(9, 5).length() = 59049 :-)
Not so long ago I encountered something called the de Bruijn sequence. For now I’ll only use this for an alphabet of (0,1), binary. But everything said here could also be applied to other alphabets. In this post I’ll describe what this sequence is, and how you can generate them, using Lyndon words.
What is a de Bruijn sequence?
Well, it is a sequence (again, binary in this case) that contains every possible string of a specific length as a substring, and each of them exactly once.
For example: B(2,4)
- 0000100110101111 (000)
This sequence contains every possible binary string of length 4. The last part (000) is optional, and only needed if you do not treat the sequence as a loop. As you can see, the length of the (cyclic) sequence is 16; these sequences always have length 2^n.
How do you construct these sequences?
The next thing I wanted to know is how to construct these sequences. The first hit I got was from the MathWorld website. It states:
Surprisingly, it turns out that the lexicographic sequence of Lyndon words of lengths dividing n gives the lexicographically smallest de Bruijn sequence (Ruskey).
It seems that the next step is generating Lyndon words. A Lyndon word is a word that is the lexicographically smallest among all of its rotations. I haven't yet found a proper way to generate them directly (I know such algorithms exist), so here is what I do:
And here is what I used to generate the smallestLyndonRotation (Java):
How does it work? Well, for example the input is: 0010.
It loops over all rotations (using some bit-masking):
And finds the lexicographically smallest one (0001) and returns it.
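With the word packed into an int, the rotation loop looks roughly like this (my reconstruction of the idea, not necessarily the original code):

```java
public class LyndonRotation {
    // Smallest of all n-bit rotations of word (rotating right one bit per step)
    public static int smallestRotation(int word, int n) {
        int mask = (1 << n) - 1;
        int current = word & mask;
        int smallest = current;
        for (int i = 1; i < n; i++) {
            // Rotate right: the lowest bit wraps around to position n-1
            current = ((current >>> 1) | ((current & 1) << (n - 1))) & mask;
            if (current < smallest) smallest = current;
        }
        return smallest;
    }
}
```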
The above pseudo code will spit out the following numbers:
These are all the Lyndon words for N=4.
Splitting, reducing the Lyndon words
Now if we put the Lyndon words together we get:
- 000000010011010101111111 (our sequence)
- 0000100110101111 (de Bruijn sequence)
We aren't quite there yet… but there is some resemblance. There is one more step left: we have to reduce each Lyndon word to its shortest repeating part.
Let's take the Lyndon words again:
- 0000 -> 0 (4x)
- 0001 -> 0001
- 0011 -> 0011
- 0101 -> 01 (2x)
- 0111 -> 0111
- 1111 -> 1 (4x)
Results in: 0000100110101111, the sequence!
And that is it! We've created the smallest possible de Bruijn sequence for B(2,4).
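The reduction step, finding the shortest repeating part of each word, can be sketched as:

```java
public class LyndonReduce {
    // Shortest prefix whose repetition forms the whole word,
    // e.g. "0000" -> "0", "0101" -> "01", "0011" -> "0011"
    public static String aperiodicPrefix(String word) {
        for (int p = 1; p < word.length(); p++) {
            if (word.length() % p != 0) continue;
            boolean periodic = true;
            for (int i = p; i < word.length(); i++) {
                if (word.charAt(i) != word.charAt(i - p)) {
                    periodic = false;
                    break;
                }
            }
            if (periodic) return word.substring(0, p);
        }
        return word;
    }
}
```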
Here are some more sequences:
About a week ago I decided to try and write a chess engine. I’ve encountered bitboards before, and I really liked working with them. Most references I found had to do with chess engines, so I decided to have a go.
The single most important and most time-consuming aspect of building a chess engine is legal move generation: in every situation, being able to generate all legal moves that can be made on the board. At first this seems pretty straightforward, as all pieces move and attack in certain ways. But when you get to specific rules like castling and en passant, things get really tricky.
But how do you know for sure your engine works and produces the right results? Chess engine developers around the world found a great solution for this problem, something I can only describe as universal integration tests. They call it "perft" (from performance test). The first thing they do is set up a particular position on the chess board. This can be described in FEN notation.
All chess engines understand this small language and know how to set up their internal board from it. The next step is to generate all possible moves. From the starting position there are just 20 possible moves. But why stop there? From each of these 20 moves, calculate all legal moves the opponent can make (also 20), which results in 400 positions. Continue doing this (the counts for the next depths are 8,902, then 197,281, 4,865,609 and 119,060,324) and you'll end up with a table like this:
As you can see, the numbers in these tables quickly become huge, which ensures that a lot of positions get tested. Using these tables you can check whether your program outputs the same values, and thus complies with all the rules. There are a lot of these tables online, each with a FEN for the starting position and the node counts per depth.
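The counting itself is a tiny recursive function; in pseudocode (Board and Move stand in for whatever types your engine uses):

```
perft(board, depth):
    if depth == 0: return 1
    nodes = 0
    for move in board.generateLegalMoves():
        board.make(move)
        nodes += perft(board, depth - 1)
        board.unmake(move)
    return nodes
```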
The next problem: when your numbers don't add up, how do you find the one move, six levels deep, that goes wrong? Well, some engines can help you with that. Personally I used ROCE (Roman's Own Chess Engine), which has a "divide" function. First you set the board to a certain position using a FEN, then you call the divide function, for example "divide 6". It then shows a table like this:
This lists each move at level 1 and, next to it, the number of nodes it leads to at the given depth. If you have the same function in your own chess engine you can compare the numbers and pinpoint which first move contains the error. Make that move and try "divide 5". This will guide you all the way down to the specific move that is (or isn't) being generated! This was a huge help, and I love the way chess engine developers devised a kind of universal integration test that points out the most commonly made bugs. You can take these tables, load them into an automated test and run them every night to see if everything still works as it should.
How did my engine end up? Well, it worked perfectly in the sense that it could generate all the legal moves. Then I added a simple evaluation function and it could play chess. After that I implemented a simple search algorithm (alpha-beta using negamax) and it could beat two other simple chess engines, and me. And of course, after a week, I lost interest again; writing a blogpost about a project is usually the nail in its coffin.
So, what should I tackle next? I’m looking forward to having a new AZsPCs competition, but I think this might take a while…
Update: A couple of readers have pointed out that these tests are obviously not unit tests, but rather integration or acceptance tests. You are completely right. I called them unit tests because I used JUnit and they run in the automated build, but they do test integration..!
A couple of weeks ago our Scrum team was thinking about exception handling. We don't use checked exceptions, since they are the embodiment of evil, so everything is translated into runtime exceptions whenever possible. But this is where the problems start: what do you do with the uncaught runtime exceptions?
Taking it one step further
Most projects decide to just write the exceptions to the console, or maybe to a log file. In some cases there is specialized software that analyzes log files, detects stacktraces and acts on them. We decided to create a fast, specialized solution: a mailbox where all the runtime exceptions end up, including the stacktrace, system properties, program properties and even a screenshot.
Catching the uncaught exception
In Java you can set an exception handler on an individual Thread, but it is also possible to set a default handler for all threads. It is very easy:
If an uncaught exception occurs, the uncaughtException method is called and you can do anything you want with the throwable. As said before, we decided it is best for us if the application phones home and sends us an email with as much information as we can get.
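A runnable sketch of the mechanism, with the reporting reduced to storing the throwable (a real handler would mail a full report instead):

```java
public class CrashHandlerDemo {
    static volatile Throwable lastCaught;

    public static void main(String[] args) throws InterruptedException {
        // Any exception that escapes any thread now lands in this handler
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            lastCaught = throwable; // real code would phone home here
        });

        Thread worker = new Thread(() -> {
            throw new IllegalStateException("boom");
        });
        worker.start();
        worker.join(); // by the time join returns, the handler has run
        System.out.println(lastCaught.getMessage());
    }
}
```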
What to send?
Our application is launched via JNLP and runs on several clients. We don't have complete control over these clients, so we want to include as much information as possible to help solve any bugs in the application. The more we know about the runtime, the easier it might be to find the problem. We grab all the (relevant) system properties and program properties we can get, including usernames and machine details.
Creating a screenshot
In Java there are actually two ways to create a screenshot. The first is the easiest, using the Java Robot:
Very easy, and it can contain a LOT of information about the moment the exception occurred, which may prove valuable. But we encountered a major privacy problem with this method: Java creates an actual screenshot, the size of your application, but it doesn't know whether your application is currently the active window..! When we tested this we saw screenshots with Skype messages on screen and browsers in front… We can't invade the lives of our users and send that information over email!
So I searched for something less invasive. The main frame of our application has a paint method, and most of the time this method can still be called after an exception has occurred. It is possible to create our own Graphics object and let the main frame paint itself into memory. So eventually I settled on the code below:
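The idea, in essence (the real code has the main frame and error handling around it), is to hand the component a Graphics that draws into a BufferedImage:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JComponent;

public class ComponentCapture {
    // Let the component paint itself into an off-screen image
    public static BufferedImage capture(JComponent component) {
        BufferedImage image = new BufferedImage(
                component.getWidth(), component.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D graphics = image.createGraphics();
        component.paint(graphics);
        graphics.dispose();
        return image;
    }
}
```

This only captures what the application itself draws, never what happens to be behind or in front of it.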
Mailing the exception
The next step in our solution is sending an email with all the information. This is done using JavaMail, an API for sending email (more about sending email from Java here). In our case we create a pretty HTML email with all the relevant information and three attachments:
- Text file with all system properties
- Text file with the complete exception
- The image/screenshot
Creating the email is easy using MimeBodyParts that contain the text; the only slightly more complicated part is adding the screenshot inside the email. This can be done using:
The only non-obvious line in the code above is setting the Content-ID, which can be used to refer to the image data from an HTML email. In the main email content I added the following line:
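With the Content-ID of the image part set to "screenshot", the HTML body references the attached image like this:

```html
<img src="cid:screenshot" alt="Screenshot at the moment of the exception"/>
```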
The src=”cid:screenshot” will cause the email to show the screenshot embedded inside the email. This makes the exception-email look very pretty.
Even more in the future…?
With our current implementation we'll be able to detect exceptions instantly (on every new email). We'll also have more information than just a stacktrace: a screenshot and some parameters/system properties that can help us recreate the situation.
To improve this even more, I'm planning to add a simple bounded FIFO queue containing the actions performed by the user. These user actions can be recorded automatically using aspect-oriented programming, for example by annotating all the service methods. Every time a service method is called we add it to the bounded queue, which has a maximum of N elements, say 20. If an exception occurs we print out the current queue, so we can read the last N actions leading up to the exception. This too can be very helpful in recreating the situation.
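The bounded queue itself is tiny; a sketch (the class name is mine):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ActionLog {
    private final Deque<String> actions = new ArrayDeque<>();
    private final int capacity;

    public ActionLog(int capacity) { this.capacity = capacity; }

    // Record an action; the oldest entry falls off when the queue is full
    public synchronized void record(String action) {
        if (actions.size() == capacity) actions.removeFirst();
        actions.addLast(action);
    }

    // The last N actions, oldest first, ready to paste into the email
    public synchronized String dump() {
        return String.join("\n", actions);
    }
}
```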
What other information could be useful in a debug email like this? How do you solve exception handling?
Today I've been looking into rainbow tables: tables used to do a reverse lookup for a hash function, for example MD5 or the Windows LAN Manager hash. Usually these tables are used to recover passwords when only the hash is known. Now, I'm not looking for a way to crack somebody's computer, but the technology and algorithms involved are very advanced and might be useful in other fields as well!
First off, let's talk about hashing. What is hashing? Well, a hash function is a one-way function which turns some data (usually text) into a hash code. For example… passwords:
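For example, hashing a password with MD5 in Java:

```java
import java.security.MessageDigest;

public class Md5 {
    // MD5 digest of the input, as a lowercase hex string
    public static String hex(String input) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(input.getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```

Md5.hex("password") gives the well-known hash 5f4dcc3b5aa765d61d8327deb882cf99.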
Every website that takes security seriously will only store these MD5 hashes in its database, not the real passwords. So even if the database is compromised, the attackers don't have much, because the hash function is one-way only.
Attacking a hash
Where there is security there will be crackers trying to break it. How would you go about attacking, i.e. reversing, a hash? The earliest approach was simply to precompute huge tables of:
Then you check whether your target hash is in the table. If you have a match, you'll get into the system!
The problem is that these tables take a very long time to compute and you’ll end up with so much data you won’t be able to store it. This made hashing pretty safe.
A couple of generations further down the line, crackers now use rainbow tables. These take the best of both worlds: a small(-ish) table on disk, and a minimal amount of computation. So how do they work?
The reduce function
Let's assume we start with a random piece of text. Say we want to crack all passwords of length 5 consisting of [A-Z0-9]. We can calculate the hash value of our random password, but instead of storing this single pair, we do a little trick. We use something called a 'reduce function': a self-made one-way function that turns a hash back into a password! Not the original password (it isn't a reverse hash function), just some other valid password.
Why would we do that? Let's continue: we take our first random password, generate a hash, and then reduce it back to another password. This is again hashed, reduced, hashed, reduced, for let's say 1000 rounds. We'll end up with:
The trick is, we throw away everything except the first input “random” and the final hash “27aa4cbd3653a4617e0aec76ba3af9a4”! This is the only part we need to store.
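Building one such chain, with a toy reduce function (the reduce function is a design choice; this one just folds hex digits of the hash back into the [A-Z0-9] password space):

```java
import java.security.MessageDigest;

public class RainbowChain {
    static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

    static String md5Hex(String input) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(input.getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Toy reduce: map a hash back to some 5-char password. Mixing in the
    // round number is optional here; it becomes essential for the rainbow
    // variant described later in the post.
    static String reduce(String hash, int round) {
        StringBuilder password = new StringBuilder();
        for (int i = 0; i < 5; i++) {
            int value = Character.digit(hash.charAt((i + round) % hash.length()), 16);
            password.append(ALPHABET.charAt((value + round) % ALPHABET.length()));
        }
        return password.toString();
    }

    // Walk the chain; only the pair (start, final hash) would be stored on disk
    public static String endpoint(String start, int chainLength) throws Exception {
        String password = start;
        String hash = null;
        for (int round = 0; round < chainLength; round++) {
            hash = md5Hex(password);
            password = reduce(hash, round);
        }
        return hash;
    }
}
```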
Using a rainbow table
How can we use the data above to reverse hashes, you might ask? To use this table we need an input hash, for example "b9322e367ad002d5adf7ca60b8b61e86". First we check whether this hash is in the database. If it is, we are very lucky: we can simply regenerate that particular chain and read off the plaintext input!
If we don't find anything (which is very likely), we apply our reduce function to the input hash and then hash that result. Now we check the stored hashes again and, on a match, regenerate the chain to find the answer. This is repeated until we hit our set limit (1000); if no match has been found by then, we can't reverse the hash.
There is one problem with this algorithm. If you take an input hash, do a couple of reduce/hash steps, and then find a match… it might be a false alarm! The problem is that different inputs can end up at the same value, and when that happens two chains merge into one.
If we had done a couple of reduce/hash steps from our input and found a match on the endpoint "52cafa6b5e4a6509e6ed2b8e6976d780", the original chain might never have contained our input value: when we reconstruct the complete chain, our input hash may not be in it…!
The ‘rainbow’ of the rainbow table
The name 'rainbow table' has to do with reducing the number of false alarms. Until now we've talked about having one reduce function. What if we had multiple reduce functions? Then we could create multiple small tables, which would already help a little.
Philippe Oechslin had a great idea: use a different reduce function for each step in the chain. So his tables are built using:
(If you color the reduce functions you’ll end up with a pretty rainbow pattern).
How would you reverse a hash using this method? Well, the first step is the same: check whether your input hash is present among the stored hashes.
Next we apply:
The big advantage is that similar hashes (collisions) will most likely hit different reduce functions, so they won't end up merging into the same chain.
Using rainbow tables you greatly decrease the number of stored values. Lookups aren't O(1), since a bit of computation is needed, but that too is kept to a minimum. The result is a very fast method to crack passwords. I think the idea can be used in other fields of computer science as well: there are a lot of situations where you'd like a very big hash lookup table, and when it becomes too big, this technique can reduce storage while maintaining fairly good lookup times.
Almost every article about hashing and rainbow tables ends with a short paragraph about salting. You can do salting in a couple of different ways, but the idea is usually the same. The simplest form of salting is having one 'salt' for the whole database.
One salt per database
How does this work? Well, we generate a completely random piece of text: "thisisoursaltanditisverylarge". Now every time we store a new user we store "MD5(password + salt)". Because the password itself may be weak, we append our large database salt to it before calculating the hash.
Now cracking a hash from this system is almost impossible, unless you find out the salt; then you could generate a complete new rainbow table and crack all the passwords.
Using a user value as salt
An even better solution is to use a per-user value as the salt, for example the username, date of birth, or registration date/time. Now if somebody cracks the database and obtains all the data, they'll have to create a new rainbow table for each separate user (!!!). This is even more secure and preferred over a single database salt.
Just generate a random sequence…
But the single best way of salting your database is to generate a large random salt for each user and store it in the database next to the hash of the password. This is better than, for example, the username, because there are simply fewer collisions: usernames like "root" or "admin" aren't uncommon, are they? So creating a rainbow table with "root" as the salt might be worth the trouble. But creating one with a large random number, just for a single user? That is hard, and attackers will probably give up quickly.
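Generating such a salt is only a few lines (a sketch; the class name is mine):

```java
import java.security.SecureRandom;

public class Salts {
    // A fresh 16-byte random salt, hex-encoded; generate one per user
    public static String newSalt() {
        byte[] bytes = new byte[16];
        new SecureRandom().nextBytes(bytes);
        StringBuilder hex = new StringBuilder();
        for (byte b : bytes) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```

The database row then holds (username, salt, MD5(password + salt)).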
Other uses for this algorithm
I haven't yet come up with another good use for this algorithm, but I have the feeling tons of problems could potentially benefit from it! Can you think of one?
Many projects I've worked on, especially the ones involving micro-optimization, have had memory leaks and surprising performance hits. Most coders who work on, for example, Al Zimmermann's programming contests use C/C++ and maybe even CUDA to get the most out of their system.
I usually use Java, just because I'm most at home in that language. It allows me to quickly get some working code and test some algorithms. But when I reach the final phase of a project, it is time to micro-optimize everything.
The first tool I've tried is Java VisualVM. This tool ships with all recent Sun/Oracle JDKs: just go to the bin directory and run jvisualvm. The program attaches to your running code and tries to extract all kinds of information. The best part of this tool is the price: it is free!
But if you really need to know what is going on inside your project, you should try YourKit. YourKit is kindly supporting open source projects with its full-featured Java Profiler. YourKit, LLC is the creator of innovative and intelligent tools for profiling Java and .NET applications. Take a look at YourKit's leading software products: YourKit Java Profiler and YourKit .NET Profiler.
If you readers are interested, I'll write a post with some simple examples of how to fix performance issues in your program, and how to detect them using VisualVM and YourKit.
Or I can do a comparison between the leading profilers (YourKit/JProfiler/Visual VM/etc)..?
Let me know!