Last summer I had some good relaxing moments: I kept playing with Ruzzle, in my own way!
I adapted the Ruzzle Solver algorithm to write solutions directly on my smartphone!
The smartphone is connected to a Raspberry Pi. A screenshot is taken, and all the letters and multipliers are extracted with a simple but effective OCR that I implemented for this project. Afterwards, the Raspberry Pi computes all the solutions using the core solver, written in C++. Finally, all the words are drawn on the phone by simulating finger movements!
But the story does not end here! I extended the project, creating a simple artificial intelligence (aka BOT).
This artificial player looks for available matches and then plays in complete autonomy. If there are no matches, it requests a new random opponent. The BOT is also polite, accepting all the invitations coming from other players.
To make a player that behaves like a human, the artificial player has some features:
- It uses the Beta distribution to choose good, but not too good, words (to avoid making opponents suspicious).
- It applies instinctive human reasoning: if the word it is currently writing is “bet”, the next words will probably be “better” or “butter”. In this way, the final word list seems written by a human.
- It determines the current round and changes its behaviour accordingly.
- Sometimes it draws random words, so it never reaches 100% accuracy.
- It limits the number of words (a random limit that varies with the current round).
- It limits the number of points (a random limit that varies with the current round).
- The dictionary is limited: it uses only a subset of the complete one.
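As an illustration, the Beta-distribution word choice from the first feature could be sketched like this. This is a minimal sketch, not the actual BOT code: the function name, the `(word, score)` solver output format, and the `alpha`/`beta` values are my own assumptions.

```python
import random

def pick_word(solutions, alpha=2.0, beta=5.0):
    """Pick a good, but not too good, word from the solver output.

    solutions: list of (word, score) pairs produced by the core solver.
    A Beta(alpha, beta) sample chooses a percentile in the score-ranked
    list; with alpha=2, beta=5 the sample is usually small, so the pick
    lands near the top of the ranking but rarely on the very best word.
    """
    ranked = sorted(solutions, key=lambda ws: ws[1], reverse=True)  # best first
    percentile = random.betavariate(alpha, beta)                    # in [0, 1]
    index = min(int(percentile * len(ranked)), len(ranked) - 1)
    return ranked[index][0]
```

Tuning `alpha` and `beta` shifts how strong the player appears: pushing the mass of the distribution toward 0 makes the BOT pick better words more often.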
Some implementation details
Letter extraction with my OCR implementation
The screenshot image is divided into 16 sub-images, one per letter. For each one, the algorithm computes a fingerprint that summarises its informative content:
- It analyses 81 points for each letter.
- It builds a fingerprint as an 81-bit binary number:
  - A 0 represents a white zone (the background).
  - A 1 represents a black zone (part of the letter).
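The fingerprint construction above could look roughly like this. It is a sketch under my own assumptions: the 9x9 sampling grid (81 points), the grayscale tile representation, and the darkness threshold of 128 are guesses, not details from the original implementation.

```python
def fingerprint(tile, size=9):
    """Build an 81-bit fingerprint from a square letter tile.

    tile: 2D list of grayscale pixel values (0 = black, 255 = white).
    Samples a size x size grid of points; each point becomes one bit:
    0 for a light pixel (background), 1 for a dark pixel (letter).
    """
    h, w = len(tile), len(tile[0])
    bits = 0
    for row in range(size):
        for col in range(size):
            y = row * h // size          # map grid point to pixel coords
            x = col * w // size
            dark = tile[y][x] < 128      # threshold is an assumption
            bits = (bits << 1) | int(dark)
    return bits                          # an 81-bit integer
```

Packing the 81 samples into a single integer makes the later bit-by-bit comparison a cheap XOR, which is convenient on a Raspberry Pi.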
Using this method, I did a little supervised training, building a list L that returns the letter associated with a fingerprint.
Now, at run time we run this same algorithm, dividing the screenshot into 16 parts and trying to understand which letter each part represents. The problem is that, due to noise, the list L may not contain the exact fingerprint we are looking for.
To solve this problem, I wrote a simple similarity function that decides how similar two fingerprints are:
- An initial score of 0 is set.
- It analyses all the bits of the two fingerprints.
- If the i-th bits are equal, it increments the score.
- If the i-th bits are not equal, it decrements the score.
- In this way, the higher the score, the more similar the fingerprints are.
This simple algorithm was effective enough to extract letters without making any mistakes.