Raspberry Pi Plays Ruzzle
Last summer I enjoyed some truly relaxing time off, part of which I spent playing Ruzzle on the go.
I adapted the Ruzzle Solver algorithm to write solutions directly on my smartphone. The smartphone was connected to a Raspberry Pi, which captured a screenshot and extracted all the letters and multipliers using a simple yet effective OCR I developed for this project.
Afterwards, the Raspberry Pi utilized my core solver, written in C++, to calculate all possible solutions. These solutions were then reproduced on the smartphone by simulating finger movements.
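The solving step can be sketched in C++ roughly as follows. The post does not include the real solver's code, so the `Solver` structure and the use of a plain prefix set (instead of, say, a trie) are my assumptions: a depth-first search starts from every cell, extends the path to adjacent unvisited cells, prunes branches that are not a prefix of any dictionary word, and records every complete word it reaches.

```cpp
#include <array>
#include <set>
#include <string>

// Illustrative sketch of a Ruzzle-style 4x4 board search, not the author's
// actual implementation.
struct Solver {
    std::set<std::string> dict, prefixes;

    explicit Solver(const std::set<std::string>& words) : dict(words) {
        for (const auto& w : words)              // store every prefix of every word
            for (size_t n = 1; n <= w.size(); ++n)
                prefixes.insert(w.substr(0, n));
    }

    std::set<std::string> solve(const std::array<std::array<char, 4>, 4>& board) {
        std::set<std::string> found;
        bool used[4][4] = {};
        std::string path;
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                dfs(board, r, c, path, used, found);
        return found;
    }

    void dfs(const std::array<std::array<char, 4>, 4>& board, int r, int c,
             std::string& path, bool used[4][4], std::set<std::string>& found) {
        path.push_back(board[r][c]);
        if (!prefixes.count(path)) { path.pop_back(); return; }  // dead branch
        used[r][c] = true;
        if (dict.count(path)) found.insert(path);
        for (int dr = -1; dr <= 1; ++dr)                 // visit the 8 neighbours
            for (int dc = -1; dc <= 1; ++dc) {
                int nr = r + dr, nc = c + dc;
                if ((dr || dc) && nr >= 0 && nr < 4 && nc >= 0 && nc < 4 && !used[nr][nc])
                    dfs(board, nr, nc, path, used, found);
            }
        used[r][c] = false;                              // backtrack
        path.pop_back();
    }
};
```

The prefix check is what keeps the search tractable: without it, a 4x4 board with diagonal adjacency has millions of simple paths to enumerate.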
But the project didn’t stop there! I expanded it by creating a simple artificial intelligence (AI).
This AI searches for available matches and plays autonomously; if no matches are found, it requests a new random opponent. And of course the bot is courteous: it accepts invitations from other players.
To mimic human behavior, the artificial player incorporates several features:
- It uses the Beta Distribution to select good, but not overly impressive, words, thus avoiding suspicion.
- It employs instinctive human reasoning. For example, if the word being written is “bet,” the next words might be “better” or “butter,” making the final word list appear human-generated.
- It adapts its behavior based on the current round.
- Occasionally, it selects random words, ensuring it never achieves 100% accuracy.
- The number of words played and the points scored are capped, with limits that vary from round to round.
- The dictionary used is a limited subset of the complete one.
This was very effective! The bot sneaked into the top 10 players in Italy and stayed there without being noticed! 😬
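The Beta-distribution trick above can be sketched as follows. The actual shape parameters the bot used are not given in the post, so `a = 2, b = 5` here are illustrative values: their draws cluster around the upper third of the best-first candidate list, so the pick is usually good but rarely the single best word.

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch, not the bot's actual tuning.
// C++ has no std::beta_distribution, but a Beta(a, b) variate is
// X / (X + Y) with X ~ Gamma(a, 1) and Y ~ Gamma(b, 1).
double sample_beta(double a, double b, std::mt19937& rng) {
    std::gamma_distribution<double> ga(a, 1.0), gb(b, 1.0);
    double x = ga(rng), y = gb(rng);
    return x / (x + y);
}

// Pick a word from (word, score) candidates, biased toward good-but-not-best.
std::string pick_word(std::vector<std::pair<std::string, int>> scored,
                      std::mt19937& rng) {
    if (scored.empty()) return "";
    // Sort best-first by score, then map a Beta(2, 5) draw to a rank.
    std::sort(scored.begin(), scored.end(),
              [](const auto& l, const auto& r) { return l.second > r.second; });
    size_t idx = static_cast<size_t>(sample_beta(2.0, 5.0, rng) * scored.size());
    if (idx >= scored.size()) idx = scored.size() - 1;
    return scored[idx].first;
}
```

Since Beta(2, 5) has mean 2/7, the chosen rank hovers around the 29th percentile of the list rather than rank 0, which is exactly the "good, but not overly impressive" behaviour.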
OCR Implementation Details
The process of letter extraction begins by dividing the screenshot into 16 sub-images, one for each letter. For each sub-image, the algorithm generates a unique fingerprint summarizing its content:
- Each letter is analyzed at 81 points.
- A fingerprint is created as an 81-bit binary number.
- A ‘0’ represents a white zone (background), and a ‘1’ represents a black zone (part of the letter).
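The fingerprint step might look like this. The post does not specify how the 81 points are laid out inside a tile, so the 9×9 sampling grid and the darkness threshold of 128 below are assumptions:

```cpp
#include <bitset>
#include <vector>

// Illustrative sketch: build an 81-bit fingerprint from one cropped letter
// tile, given as a grayscale raster (0-255, darker = ink).
std::bitset<81> fingerprint(const std::vector<std::vector<int>>& tile) {
    std::bitset<81> fp;
    int h = static_cast<int>(tile.size());
    int w = static_cast<int>(tile[0].size());
    for (int i = 0; i < 9; ++i)
        for (int j = 0; j < 9; ++j) {
            // Sample at the centre of grid cell (i, j).
            int y = (2 * i + 1) * h / 18;
            int x = (2 * j + 1) * w / 18;
            // Bit is 1 for a dark zone (letter), 0 for a light zone (background).
            if (tile[y][x] < 128) fp.set(i * 9 + j);
        }
    return fp;
}
```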
With this method, I performed a small amount of supervised training to build a lookup table L, which maps a fingerprint to its associated letter.
At runtime, this algorithm divides the screenshot into 16 parts and identifies the letter each part represents. To handle noise, which can produce a fingerprint with no exact match in L, I devised a simple similarity function:
- The function starts with an initial score of 0.
- It compares each bit in the fingerprints.
- If the i-th bits match, the score increases.
- If the i-th bits do not match, the score decreases.
- A higher score indicates greater similarity between fingerprints.
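Under these rules the score works out to 81 minus twice the Hamming distance between the two fingerprints. A minimal sketch, where `classify` is a hypothetical helper that scans the lookup table L for the best-scoring entry:

```cpp
#include <bitset>
#include <utility>
#include <vector>

// +1 per matching bit, -1 per mismatching bit over the 81-bit fingerprints.
int similarity(const std::bitset<81>& a, const std::bitset<81>& b) {
    int diff = static_cast<int>((a ^ b).count());  // number of disagreeing bits
    return (81 - diff) - diff;                     // matches minus mismatches
}

// Return the letter of the table entry most similar to the query fingerprint.
// (The table stands in for the list L described in the post.)
char classify(const std::bitset<81>& fp,
              const std::vector<std::pair<std::bitset<81>, char>>& table) {
    int best = -82;                                // below the minimum score of -81
    char letter = '?';
    for (const auto& entry : table) {
        int s = similarity(fp, entry.first);
        if (s > best) { best = s; letter = entry.second; }
    }
    return letter;
}
```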
This algorithm proved effective enough to extract the letters without any mistakes, producing better results than general-purpose OCR libraries like Tesseract.