This week in DITA we covered altmetrics, which measure the impact of articles and other scholarly documents. There are a few tools available to help understand and observe this impact, and the one we used during the lab was Altmetric, which measures the amount of online attention that articles and datasets (with a DOI) receive on social media platforms, literary reviews, news outlets and reference managers. It does this by using APIs to track down the number of times an item has been referenced or linked by particular websites.
In practice, this means a person can view the number of times a blog post has been linked on other sites, and see which sites and which readers the views came from.
Altmetric compiles all the information on the attention received and gives a score based on it. Each type of website that links to the article is given a different weighting, Facebook being the lowest at 0.25 and news outlets being the highest at 8. Altmetric will also attempt to look into each mention where possible, to gauge the importance of the source, how many people it may reach, and any bias it may have.
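To make the weighting idea concrete, here is a minimal sketch of how a weighted attention score could be computed. The weights for Facebook (0.25) and news (8) are the ones mentioned above; the other weights, and the scoring function itself, are my own illustrative guesses, not Altmetric's actual algorithm.

```python
# Illustrative source weights: only the Facebook and news values come
# from Altmetric's documentation as described above; the rest are
# placeholders I made up for this sketch.
SOURCE_WEIGHTS = {
    "news": 8.0,       # highest weighting mentioned
    "blog": 5.0,       # placeholder value
    "twitter": 1.0,    # placeholder value
    "facebook": 0.25,  # lowest weighting mentioned
}

def attention_score(mentions):
    """Sum the weight of each mention according to its source type."""
    return sum(SOURCE_WEIGHTS.get(source, 1.0) for source in mentions)

score = attention_score(["news", "twitter", "facebook", "facebook"])
print(score)  # 8 + 1 + 0.25 + 0.25 = 9.5
```

The point is simply that one news mention counts far more than many Facebook posts, which is why two articles with the same number of mentions can end up with very different scores.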
It also shows the demographics of the readers viewing the article, both by geography and by type of reader (member of the public, scientist, science communicator, practitioner). Reader type is inferred by looking at keywords in their profile descriptions, and geographic location is found using geolocation data.
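The keyword-matching idea can be sketched very simply. The keyword lists below are purely hypothetical examples of my own, not the rules Altmetric actually uses:

```python
# Hypothetical keyword lists for each reader type; Altmetric's real
# classification rules are not public, so these are invented examples.
READER_KEYWORDS = {
    "scientist": ["phd", "researcher", "professor"],
    "science communicator": ["journalist", "science writer"],
    "practitioner": ["clinician", "librarian", "engineer"],
}

def classify_reader(profile_description):
    """Guess a reader type from keywords in a profile description."""
    text = profile_description.lower()
    for reader_type, keywords in READER_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return reader_type
    return "member of the public"

print(classify_reader("PhD student studying entomology"))  # scientist
print(classify_reader("I like cats"))  # member of the public
```

This also hints at one of the weaknesses discussed below: anyone whose profile doesn't happen to contain a matching keyword falls into the default category.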
From this I understand how altmetrics can benefit people who want to know more about the quality of an article or the reception it receives from the public. Unlike citations, which only show which journals cite the article, altmetrics can give a broader view of its impact, including page views, downloads and more.
However, I feel that Altmetric doesn't always give enough information to determine the quality of an article. It doesn't show whether the attention towards the article is positive or negative, nor can it tell us anything about the actual validity of the article. Geolocation can only be used when people allow their location to be known, which on Twitter is only about 1% of users. It also tells us nothing about the quality of the researchers engaging with the article, and comparing it to older articles is difficult, since older articles have had more time to accumulate attention.
I believe altmetrics are useful for finding out more about the impact of an article, but there is still so much they cannot tell us, and there are few tools available at the moment to help uncover such information.
All my life I’ve loved all kinds of animals but the creatures that hold the biggest space in my heart are insects.
This may be a bit unusual, seeing as most people are quite put off by them, but I personally find them fascinating and beautiful creatures, small yet complex compared to vertebrates. That is why, when Ernesto mentioned in this week's DITA lesson that tweets were like butterflies, I found myself resonating so strongly with the concept. It opened my eyes to viewing tweets in a completely different, almost natural way.
This week we created an app to collect and archive tweets using keywords and hashtags. We used TAGS, an application created by Martin Hawksey that draws on the Twitter Search API, to compile all the tweets containing the tag #citylis. The exercise taught me how data can be visualised, and I learnt several things, such as who the top tweeters are and which subjects relate to them. It's interesting to see how data can be generated using apps, and I wonder how it could be used to aid further research.
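Under the hood, TAGS is essentially sending search requests like the one sketched below. This is only an illustration of the request shape, assuming Twitter's v1.1 search endpoint; the bearer token is a placeholder you would have to obtain from Twitter yourself.

```python
import urllib.parse

BEARER_TOKEN = "YOUR-TOKEN-HERE"  # placeholder credential, not real

def build_search_url(query, count=100):
    """Build a Twitter v1.1 search URL for tweets matching a query."""
    params = urllib.parse.urlencode({"q": query, "count": count})
    return f"https://api.twitter.com/1.1/search/tweets.json?{params}"

url = build_search_url("#citylis")
print(url)
# The actual request would carry the token in an Authorization header:
#   headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
```

Note that the `#` in the hashtag has to be percent-encoded (`%23`), which is exactly the kind of detail a tool like TAGS hides from you.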
When I read Ernesto’s articles on Twitter being used for public evidence and on archiving and storing datasets, I started to understand the importance of Twitter data in research, especially in understanding today’s trends. Tweets are already recognised to contain significant information about today’s culture, and even the Library of Congress now holds an archive of tweets from 2006-2010. As regards the ethics of using Twitter data, I find that scientists should have the right to use data made available to the public, as they would in any public situation where data is gathered. I think Caitlin M. Rivers’s report on ethical standards for using big data outlines well how we should treat datasets while respecting privacy, and it works much as most ethical standards for scientific research do.
Tweets are like butterflies in more ways than one: they’re small, numerous and contain huge amounts of data important for study, and unless scientists are able to collect samples, we cannot expect to learn more about the subject being studied. Twitter data has proven its use in past studies, such as the one by JISC relating to the London riots, which I find a particularly interesting example because it dismisses earlier assumptions about the use of social media by drawing on data collected from it. It’s important that we study social network data when trying to understand today’s culture; it plays such an integral part in people’s lives that it would be irrational not to take it into account.
To conclude, I’ve decided to rename this blog lepidopterans, after the taxonomic order of butterflies and moths, to honour this metaphor which joins together two things that I love dearly. I think that if tweets are butterflies, then we are the information entomologists that study them!
(I’m trying a new way to organise my thoughts into words since I’ve been dissatisfied with my previous blog posts.)
This week in DITA, we’ve learnt about web services, the dynamic content you see and use on a web page, and APIs, the interfaces that allow different pieces of software to interact. In this post I intend to explain the things we’ve learnt and my opinions on them.
APIs, short for Application Programming Interfaces, allow you to access data from a web server, which you can later use to develop and modify programmes. Originally this was something I struggled to understand, partially because while I was reading about it, something felt so…incorporeal. I understood coding languages because I could see the code that creates the outcome, but with APIs I could not see the process of software interacting with one another, or how the data was accessed. However, they are an important tool in the creation and development of web applications. Applications can use the data collected to manipulate websites and personalise them for each individual user, and this enhances web services even further!
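What helped the idea feel less incorporeal for me is seeing what an API response actually looks like: instead of a rendered web page, the server sends back structured data, usually JSON, for the application to reuse. The payload below is invented purely for illustration:

```python
import json

# An invented example of the JSON body an API might return in place
# of a human-readable web page.
response_body = '{"title": "Tweets as butterflies", "views": 128}'

# The application parses the structured response and can then reuse
# the data however it likes, e.g. to personalise a page.
article = json.loads(response_body)
print(article["title"], article["views"])
```

The interaction between two pieces of software is "just" this: one side asks a URL for data, the other replies with a machine-readable document like the one above.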
It’s also thanks to APIs that different websites are able to interact with each other, and that content can be placed on other websites through embedded shortcodes. I feel this is a very important development, because re-posting content is a huge issue on the web; by embedding content such as videos and music, you are able to share them on your preferred platform while still directing views and responses to the original creator, which I think is very important.
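One common mechanism behind this kind of embedding is the oEmbed protocol: the embedding site asks the host site for a ready-made embeddable snippet. As a sketch, here is how a request URL for YouTube's oEmbed endpoint could be built; the video URL is a placeholder, not a real video.

```python
import urllib.parse

def youtube_oembed_url(video_url):
    """Build an oEmbed request URL asking YouTube for embed data."""
    params = urllib.parse.urlencode({"url": video_url, "format": "json"})
    return f"https://www.youtube.com/oembed?{params}"

print(youtube_oembed_url("https://www.youtube.com/watch?v=VIDEO_ID"))
```

The JSON response to such a request contains the HTML snippet your platform drops into the page, so the video still plays from, and counts towards, the original creator's channel.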
This week in DITA we talked about relational databases and information retrieval, and did several exercises in information retrieval. My title is just me quoting a result I had after having a bit of fun with Google’s autocomplete. I like questions; I like the sense of innocent curiosity you get from the ones people search for. You really can’t find fault with even the most ridiculous of questions, because in the end each is a genuine concern or curiosity a person has about the world. The main issue is how to find or provide the answers to them.
It’s interesting to see how far search techniques for querying databases have developed, even as the amount of information on the Web grows daily: from structured queries to being able to input natural language searches to find items. I think it makes it simpler and easier for most people to use unstructured queries, without having to remember the specific search operators needed to find relevant answers.
However, reflecting on a lab exercise we did previously, where we evaluated websites, what I found dissatisfying about many of them was how they relied on Google search to find particular pages or information on the site. In my opinion, the problem is that without advanced search tools, or even a basic understanding of search operators, it is difficult to filter through search results to find the exact page you need, and I feel that most people, regardless of how often they use Google, are unaware of the latter.
I started wondering if there is a way to integrate search operators into people’s normal practice when entering queries; it’s something that I think most students now study when they’re young but never really continue to practise, for some reason. I think this is partly because search operators, well, don’t come as naturally as simply inputting unstructured queries into a search. Furthermore, as much as I enjoyed using search operators, when I did the exercise I found that Google’s information retrieval has adapted so well to natural language that you can find pretty relevant answers to many general questions.
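To show what the gap between unstructured and structured queries looks like, here is a tiny sketch that wraps a phrase in the two operators most people never learn: quotation marks for an exact phrase, and `site:` to restrict results to one domain (both standard Google syntax; the domain here is just an example).

```python
def structured_query(phrase, site=None):
    """Turn a plain phrase into a query using exact-match quotes
    and an optional site: restriction."""
    query = f'"{phrase}"'
    if site:
        query += f" site:{site}"
    return query

print(structured_query("information retrieval", site="city.ac.uk"))
# "information retrieval" site:city.ac.uk
```

Typing the second form by hand is exactly the habit that doesn't come naturally, which is perhaps why search engines have worked so hard to make the unstructured version good enough.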
This raises the question of how far search engines can improve the quality of information retrieval. Is it up to technology to advance at this point, or for people to improve their own technical knowledge so they can use the full potential of these information retrieval tools?
Just for my own amusement, I’ve decided to end this page with the rest of my results on Google:
(Is it charming enough to go on Google Poetics??)