Mining the Old Bailey

This week on DITA we learnt about data mining, using the API Demonstrator to try data mining out as well as do some text analysis. I was excited to use the Old Bailey API to look at transcripts of some of my favourite cases. That is, until I realised all my favourite cases occurred after 1913. Nevertheless, I decided to search for crimes involving secondary participation.

First, I used the general search bar on the website. Since secondary participation is a more recent term, I used keywords such as ‘aiding’ and ‘abetting’. I was also interested in murder cases in particular, so I searched under these terms, which returned 172 results.

I then decided to use the API Demonstrator, using it to find out the number of killings committed by women in which the defendant was found guilty. It’s pretty interesting because it allows you to break the results down by subcategories and other aspects of the trial. It’s interesting to see how its format is similar to that of text analysis tools such as Voyant.
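The kind of breakdown the demonstrator produces is essentially a cross-tabulation of trial records by category. A minimal sketch of that idea in Python — the trial records below are entirely made up for illustration, not real Old Bailey data:

```python
from collections import Counter

# Hypothetical trial records, invented for illustration; the real
# API Demonstrator returns counts broken down by whichever
# categories (offence, verdict, defendant gender, etc.) you pick.
trials = [
    {"offence": "killing", "gender": "female", "verdict": "guilty"},
    {"offence": "killing", "gender": "female", "verdict": "not guilty"},
    {"offence": "killing", "gender": "male", "verdict": "guilty"},
    {"offence": "killing", "gender": "female", "verdict": "guilty"},
]

# Tally killings by defendant gender and verdict.
tally = Counter((t["gender"], t["verdict"]) for t in trials)
print(tally[("female", "guilty")])  # killings by women found guilty
```

Each extra subcategory just adds another field to the tuple being counted, which is why the demonstrator can slice the trials so many ways.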

I later drilled it down to the four trials concerning highway robbery and tried to export the four results, but it didn’t work immediately. I tried again when I got home, when I thought there would be less traffic, and it worked!


Overall, the site worked fairly well despite the troubles with exporting the data to Voyant. I feel like this would be a particularly helpful tool for people looking up cases for reference: if someone wanted to go through old law, they could take old cases, run them through Voyant and analyse how often certain details of a case crop up, such as who heard it, as well as how relevant legal concepts such as defences were. Part of me wishes I had known about things such as distant reading, text analysis and coding while I was still doing law… It might have been easier for me to sift through and find cases relevant to my research.

While I was waiting for the API Demonstrator to export my data to Voyant, I decided to move on to the Utrecht University Digital Humanities Lab and read about the ABO (Annotated Books Online) Project, which aims to understand how users in the past used their books by what they annotated in them, and holds about 60 copies of annotated books.

The search function allows you to search by specific criteria such as language.


It searches the text by annotations and gives translations and additional notes for them. I first searched for an annotation containing the word ‘deer’ to test how it worked. Unlike the Old Bailey, it highlights the annotation on the actual scanned document, possibly because it is important to understand the context in which the annotation was made. It offers fewer search conditions than the Old Bailey, but again this could be down to the type of data each is collecting: for the Old Bailey it may be more useful to sift through the data by categories such as verdict, offence and the defendant’s details, whereas it may be harder to categorise annotations by anything other than language and the like.


For some reason transcriptions are not on all the pages: either it is not something they aim to do, or they haven’t transcribed those annotations yet. You can view several books at the same time, which is a nice touch, though it is quite frustrating that it doesn’t point out where the annotations are. I wish there were an option to read through texts that have already been transcribed and annotated, but hopefully there will be once more of the project is done! It was interesting in context, and I look forward to testing out data mining on collections where more of the information has been digitised!


Cloud Watching

This week on DITA we learnt about distant reading and text analysis, and used various online tools to analyse text.

Distant reading is a form of reading where, instead of focusing on an in-depth analysis of one text, many texts are analysed together as a dataset to understand them all. Text analysis is a form of distant reading that analyses large amounts of text for the frequency of words, patterns within the text, and how often words are used in a particular context. There are various tools that can be used in text analysis, and in our lab we tried out just a few to generate text clouds. I did it with an Altmetric report on how often articles about gender were tweeted in Library and Information Science.
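At its simplest, the frequency counting these tools do is just tallying words. A minimal sketch in Python, with a made-up sample sentence standing in for the Altmetric report:

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Count how often each word appears, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Invented sample text, just to show the counting.
sample = "Gender and science: gender in library and information science."
print(word_frequencies(sample))
```

A word cloud is essentially a visualisation of exactly this frequency table, with font size standing in for the counts.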

The first one is Wordle, which is a simple word cloud generator. It gives people the option of changing visuals such as font and colour, as well as the number of words used in the cloud. At most, it is only capable of generating a visual of the words.


The next one was Many-Eyes, which offers people a few more ways to visualise data besides word clouds, including pie charts and graphs. However, as much as I wanted to have a word cloud of this again, it took a long time to get it to visualise one without it crashing. In terms of capability I find it pretty similar to Wordle, just with more choice when it comes to forms of visualisation. It still sorts the text by frequency of appearance or alphabetically.


The final one, and my personal favourite, is Voyant. Voyant not only generates a word cloud but also offers many tools, such as editing stop words so you can exclude words that you feel are irrelevant, as well as seeing the number of times each word appears in the text.
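Stop-word editing is just filtering the word list before counting. A small sketch of the idea — the stop-word list below is tiny and hand-picked for illustration, whereas Voyant ships much longer, editable lists:

```python
from collections import Counter
import re

# Tiny illustrative stop-word list; real stop-word lists run to
# hundreds of entries and, in Voyant, can be edited by the user.
STOP_WORDS = {"the", "a", "an", "and", "of", "in", "to", "is"}

def frequencies_without_stop_words(text):
    words = re.findall(r"[a-z']+", text.lower())
    kept = [w for w in words if w not in STOP_WORDS]
    return Counter(kept)

freqs = frequencies_without_stop_words(
    "The gender of the author and the gender of the reader"
)
print(freqs.most_common(2))
```

Without the filter, "the" and "of" would dominate the cloud; with it, the content words rise to the top.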


Not only that, the user is able to pick and observe specific words. For example, if I wanted to know how often science is a subject in the tweets, it can highlight and show where the word occurred in the text, as well as the context of those occurrences. It can also compare words on a chart: I compared it with ‘internet’ to see how often the two appear together and where. Overall, it is a more effective tool for detailed text analysis than the other two.
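Showing a word's occurrences together with their surrounding context is known as a keyword-in-context (KWIC) concordance, and is simple to sketch. The sample sentence below is invented for illustration:

```python
import re

def concordance(text, keyword, window=3):
    """Return each occurrence of keyword with `window` words of
    context on either side, keyword shown in upper case."""
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            left = words[max(0, i - window):i]
            right = words[i + 1:i + 1 + window]
            hits.append(" ".join(left + [w.upper()] + right))
    return hits

# Made-up sample text standing in for the tweet corpus.
text = "Tweets about science spread fast; science news travels on the internet."
for line in concordance(text, "science"):
    print(line)
```

Comparing two words, as Voyant's chart does, then amounts to running this for each word and lining up where in the text the hits fall.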