Beta release

We have now launched a beta version of TombReader! This version is only available to a select group in the Allard Pierson museum. We will do extensive beta testing and gather feedback to improve TombReader before releasing it to Google Play.

Finally, we wish everyone a great 2015!

Almost there

It’s been a while since the last progress report, but the pilot for TombReader is now nearly finished. We are currently preparing two quests. Both quests have been written by W.M. van Haarlem MA. and revolve around different topics. They are designed to give a deeper understanding of the objects and their context. The first quest is about Pharaohs, gods and temples, while the second quest addresses rituals and customs regarding life after death.
The future of TombReader is looking bright, and we are considering expanding TombReader to other museums. Furthermore, we would like to make it possible for anyone to create their own quest, with their own objects. But not just yet; you will have to wait a little longer before we get there.

TombReader design

Since the last post we have created an alpha version of TombReader, which we have tested on several Android devices. This month’s progress has mainly been on optimizing the speed of TombReader to make it run smoothly, while still being able to recognise the objects in real time. The layout of TombReader can be seen in the image below.

  1. Startup screen with the option to gather additional information about the objects, or to start the Quest edition (not yet implemented).
  2. Simply point the camera at an object to start the recognition.
  3. When TombReader sees an object, but is not yet sure what kind of object it is, it will display an eye indicator. Hold the camera still to complete the recognition.
  4. When TombReader is confident about the object in front of the camera, it will display an overlay with some of the additional information (a sketch of this confidence logic follows the list).
  5. When clicking on the overlay, all of the available information is shown, including related photos.
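
Steps 2–4 above behave like a small confidence state machine. Here is a minimal Python sketch of that logic; the threshold values and object names are illustrative assumptions, not TombReader’s actual settings.

```python
# Minimal sketch of the recognition feedback in steps 2-4, assuming
# hypothetical confidence thresholds and object names.

UNSURE_THRESHOLD = 0.4     # above this: show the eye indicator
CONFIDENT_THRESHOLD = 0.8  # above this: show the information overlay

def recognition_state(scores):
    """Map per-object classifier scores to one of the three UI states."""
    best_object = max(scores, key=scores.get)
    best_score = scores[best_object]
    if best_score >= CONFIDENT_THRESHOLD:
        return ("overlay", best_object)  # step 4: show additional information
    if best_score >= UNSURE_THRESHOLD:
        return ("eye", None)             # step 3: ask the user to hold still
    return ("scanning", None)            # step 2: keep pointing the camera

# Example frame where the classifier is fairly sure it sees a stela.
print(recognition_state({"stela": 0.85, "scarab": 0.10, "amulet": 0.05}))
```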

For the coming months I’ve been given the opportunity to develop another App in collaboration with the HvA. Taking this job means that I will be working on that App for the next two months and won’t be able to develop TombReader as actively as I used to. But don’t worry: TombReader will be released before Christmas!

Milestone Reached

This month one of the major milestones has been reached: being able to recognise objects with a smartphone. While it was previously only possible to recognise the objects from my own laptop, I have now successfully ported all of the detection algorithms to an Android App, which can distinguish the 29 objects from the Allard Pierson museum. A simple demo has been created which indicates what type of object is in front of the camera.
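
To give a feel for what such a demo involves, here is a rough Python/OpenCV sketch of a laptop-style recognition loop. The `classify` stub stands in for the real 29-object model, whose internals the post does not describe.

```python
import cv2

def classify(frame):
    """Stub standing in for the real 29-object model: always returns a
    fixed label so the loop below is runnable as a sketch."""
    return "stela", 0.90

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    label, score = classify(frame)
    # Draw the predicted object type on the frame, as the demo does.
    cv2.putText(frame, f"{label} ({score:.0%})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```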

The next step is to create a user interface which can display background information such as the translation.

Although the demo runs fine, I will still be working on increasing the classification speed and decreasing the size of the App to make it as light as possible!


Towards building the App

Since the last post, we have gathered approximately 2000 images of the 29 objects in the Allard Pierson museum. The reason for collecting so many images is that the system must be able to recognise the objects from all angles and at various distances. Unlike humans, who can capture the visual essence of an object after seeing it only once, computers need more data to perform this task. They simply don’t know what the object will look like when you turn the camera a bit.
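
To give an idea of how such a collection becomes training material, here is a small Python sketch that loads a folder-per-object image set into labelled examples. The directory layout and path are assumptions for illustration; the post does not say how the images are actually stored.

```python
import os
import cv2

def load_dataset(root):
    """Load (image, label) pairs from one folder per museum object."""
    images, labels = [], []
    for label in sorted(os.listdir(root)):        # one folder per object
        obj_dir = os.path.join(root, label)
        if not os.path.isdir(obj_dir):
            continue
        for name in os.listdir(obj_dir):
            img = cv2.imread(os.path.join(obj_dir, name))
            if img is not None:                   # skip non-image files
                images.append(img)
                labels.append(label)
    return images, labels

# Hypothetical dataset directory name.
images, labels = load_dataset("allard_pierson_objects")
print(f"{len(images)} images of {len(set(labels))} objects")
```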

I can now happily announce that we have decided on an algorithm to classify the objects. On our test database this algorithm was able to classify 96% of the images correctly, which is enough for the final App.
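
A classification rate like this is typically measured by holding out part of the labelled images and scoring predictions on them. Below is a sketch of that procedure, reusing the `images` and `labels` from the loading sketch above; the k-nearest-neighbour classifier and raw-pixel features are stand-ins, since the post does not name the chosen algorithm.

```python
import cv2
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Resize to a fixed size and use raw pixels as naive stand-in features.
X = [cv2.resize(img, (64, 64)).flatten() for img in images]

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

# Stand-in classifier; score it on the held-out test images.
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"correctly classified: {accuracy:.0%}")
```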

Therefore, most of the research is finished, meaning that I can now fully focus on actually building the App!

First month

It’s been one month already, and during this time I’ve collected a dataset of the objects found in the Allard Pierson museum. This dataset consists of pictures of the objects from all angles, and it is used to train the computer so that it will be able to recognise these objects when you take a picture of them.

Based on the objects from the Allard Pierson museum, we can conclude that it will be much more beneficial to move away from individual hieroglyph recognition and to focus on detecting the objects as a whole. This brings a few advantages, such as a higher classification rate and real-time classification, which means that the App will recognise the object as soon as it appears in front of the camera.

Currently, one of the most promising methods for this task is based on an Artificial Neural Network, which tries to mimic the behaviour of real brain cells. Much research has been conducted in this particular field, as the biological brain is still the most advanced system when it comes to recognising visual input.
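
To make this concrete, here is a minimal sketch of what a small artificial neural network for a 29-object classification task can look like, written with the Keras API. The architecture and layer sizes are illustrative guesses; the post does not describe the network actually used.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative network: layer types and sizes are assumptions.
model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),           # small RGB input images
    layers.Conv2D(16, 3, activation="relu"),   # learn local visual patterns
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(29, activation="softmax"),    # one score per museum object
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```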

The main focus for now lies on training the method with the images from the Allard Pierson museum and putting all that knowledge on your smartphone!

First week

As of this week, I have officially started working full time to make TombReader a reality!
First course of action: improve detection, which is the ability to localise hieroglyphs in a photo before knowing their type. This part will also take care of finding the columns along which the hieroglyphs are written, as well as finding the cartouches that depict the name of a Pharaoh. The most difficult part of hieroglyph detection is separating clustered hieroglyphs from each other, but with the use of clever algorithms the computer will be able to differentiate between hieroglyphs that are close to each other. Here is a quick preview of the current state:

Order:
– Raw input image
– Detection output + found columns and cartouches (a cartouche is the frame around a royal name)
– Hieroglyph localisation
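
To illustrate the localisation idea in code, the sketch below finds candidate hieroglyph regions as dark shapes on a lighter background using OpenCV. The input filename is hypothetical, and the real pipeline’s clever algorithms for splitting clustered hieroglyphs and finding columns and cartouches go well beyond this basic idea.

```python
import cv2

image = cv2.imread("stela.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
# Treat dark carvings on lighter stone as foreground.
binary = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 35, 10)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

boxes = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if w * h > 100:  # discard tiny noise regions
        boxes.append((x, y, w, h))

print(f"{len(boxes)} candidate hieroglyph regions")
```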