The Royal Irish Academy/Acadamh Ríoga na hÉireann champions research. We identify and recognise Ireland’s world class researchers. We support scholarship and promote awareness of how science and the humanities enrich our lives and benefit society. We believe that good research needs to be promoted, sustained and communicated. The Academy is run by a Council of its members. Membership is by election and considered the highest academic honour in Ireland.

Alan Smeaton MRIA: Computer Scientist

18 February 2021

Professor Smeaton’s current research focus is on the relationship between human memory and information finding.

Alan Smeaton MRIA, School of Computing and Insight Centre for Data Analytics, Dublin City University

I am a computer scientist by training, having completed my degrees at UCD. In my earlier years there the class was so small that we were all accommodated for lectures, and for our workspace, in our own dedicated Portacabins behind what is now the main library on the Belfield campus. The winters were cold but the camaraderie was great, and when you’re an undergraduate student with your own keys to your own cabin, that generates a spirit of independence and adventure.

My research started in the area of information retrieval; basically, matching a user’s text query against a collection of documents to find the most relevant matches, and using statistical modelling techniques for the process. Then a thing called the World Wide Web happened, and suddenly information retrieval was one of the hottest applications of computer science. Systems like Alta Vista, InfoSeek, Lycos, Excite, Inktomi, and another called Google, all launched within a few years of each other to help people find things on the web. Very quickly we saw a divide in the information retrieval research community: there were those inside some tent, working for companies like those listed above and with access to their computing resources and their data, and there were those of us who were publicly funded researchers and outside the tents, with only a small fraction of their data and computing resources. We struggled to make progress.
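The statistical matching described above can be sketched with a classic TF-IDF weighting and cosine-similarity ranking. This is a minimal illustration, not Professor Smeaton's actual method; the toy documents and query are invented, and real retrieval systems of that era added many refinements (stemming, stop-word removal, document-length normalisation).

```python
import math
from collections import Counter

def build_index(docs):
    """Tokenise documents and compute inverse document frequency (IDF)."""
    tokenised = [d.lower().split() for d in docs]
    n = len(tokenised)
    df = Counter()
    for tokens in tokenised:
        df.update(set(tokens))  # count each term once per document
    idf = {t: math.log(n / df[t]) for t in df}
    return tokenised, idf

def tfidf(tokens, idf):
    """TF-IDF weight vector for a token list; unseen terms get weight 0."""
    tf = Counter(tokens)
    return {t: tf[t] * idf.get(t, 0.0) for t in tf}

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented toy collection
docs = [
    "statistical models rank documents by relevance to a query",
    "matching a text query against a collection of documents",
    "the winters were cold but the camaraderie was great",
]
tokenised, idf = build_index(docs)
doc_vecs = [tfidf(tokens, idf) for tokens in tokenised]

query = tfidf("query matching against documents".split(), idf)
ranked = sorted(range(len(docs)),
                key=lambda i: cosine(query, doc_vecs[i]), reverse=True)
```

Terms that occur in few documents get a high IDF weight, so rare, discriminating words dominate the match score; here the second document shares the most distinctive terms with the query and ranks first, while the unrelated third document scores zero.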

Although publicly funded researchers in information retrieval were at a severe disadvantage for a number of years, access to data and to computational resources has since improved a great deal. While there is still a divide between the research that those in large internet companies can do and the research that those of us using public resources can do, most of the real innovation and progress in my research area now comes from publicly funded research rather than from within industry.

In my work these days I cover many topics and application areas because my interests are wide, but two recurring themes are the use of machine learning and the development of applications that help people to find things. I use machine learning in applications as broad as measuring the health of new-born calves by means of neck-worn accelerometers; diagnosing knee injuries from MRIs; generating video summaries of online lectures; and synthetic media generation and the detection of ‘deepfakes’—computer-manipulated images in which one person’s likeness has been used to replace that of someone else. The list goes on.

When I saw the emergence of the divide between private and public resources in text-based information retrieval, I switched my interest to applying analysis, indexing, search, summarisation and other processes to images, and then to videos. Within the last decade we have seen what could be called extreme levels of progress in these fields. We are at the point now where we can take images or videos in many genres and, using computer vision techniques, we can perform analysis and decision tasks on them that compare with, or in some cases exceed, the levels of analysis that humans can achieve. From medical diagnostics to autonomous driving, from emotion- and attention-recognition in faces to counting crowd numbers in public spaces—these are tasks for which researchers, including myself, have developed systems that can beat human levels of performance.

Three main factors have contributed to the perfect storm that has made this progress possible. One is the increase in computational power, its availability and its affordable cost, and for this we have hardware engineering and software development to thank. Another is improved data availability: the release of data resources in open-access formats, and the emergence of data challenges and benchmark activities, have been catalysts for our progress. The third and final factor has been developments in machine learning. We have moved from simple linear regressions to complex, bio-inspired neural networks in less than a couple of decades, and developments in these areas have not just been theoretical; we have seen field testing and evaluation, and also deployments and practical implementations. Anyone with moderate computer programming skills can now easily avail of all of these advances. This means the barrier to getting up and running with a machine-learning implementation is not that high, which is why we see more and more use of artificial intelligence-based systems today.

One downside of the improvements in machine learning, in terms of the performance of techniques and their ease of use, is that many systems in widespread use today are branded as using artificial intelligence (AI), whereas when you peel away and examine what they actually do there is no real intelligence at play at all. A case in point is the system used to target personalised Facebook advertising in the 2016 US presidential election, for which Cambridge Analytica became infamous. This was branded as being ‘AI-based’, but in fact it used a simple form of linear regression, implemented in Microsoft Excel. There are many examples of such over-reaching in AI branding; in the long term this misuse of the term ‘AI’ may come back to haunt us, as we expect intelligence but in fact get simple processing.
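To see just how simple such a "simple form of linear regression" is, here is the standard ordinary-least-squares fit of a line, the same closed form behind spreadsheet functions like SLOPE and INTERCEPT. The data below are invented for illustration and have nothing to do with the actual Cambridge Analytica models.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b.
    Slope a = cov(x, y) / var(x); intercept b passes through the means."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Invented data points for illustration only
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)

def predict(x):
    """Score a new input with the fitted line."""
    return a * x + b
```

A dozen lines of arithmetic, with no learning beyond fitting two coefficients; calling a system built on this "AI" is exactly the kind of over-reach the paragraph above describes.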

With regard to helping people to find things, I believe that technology in all its forms should ultimately be used to assist people to do things better, whether this involves activities of daily living, work-related tasks, leisure pursuits, interaction with others, or self-improvement. Often the search process is polluted by the presence of disinformation or misinformation, and sometimes we have trouble even formalising what kind of information we want to find. The basic ground rule that technology should always make people’s lives better has led me to ask why people actually need to search for information online in the first place. Sometimes it is genuinely to find information that we do not know, but sometimes it is to re-find information that we knew once but have now forgotten. How do we get to such positions? There is a really strong connection between our information seeking and our memory, and I find this intriguing.

I will always have an interest in the wide range of applications of machine learning—which is the fastest-moving area in computing—but it is in the study of human memory, why it fails and how it fails, and how we can develop technology that can help plug that gap, that I will continue my work. My recent progress in that area is in automating the ways we can compute the memorability of media such as images and video using, yes, you’ve guessed it, machine-learning techniques. For that task we have best-in-class performances, which encourages me to continue to explore the relationship between human memory and information finding.

Read other Member Research Series articles
