Tuesday 24 March 2015

Demystifying Cognitive Computing

Is "Cognitive Computing" a huge computing revolution that ushers in an era of thinking machines?

If you listen to Professor Stephen Hawking and some of the press commentary, you might be forgiven for thinking that the age of science-fiction is upon us. However, the reality is a little more mundane, much less scary, but no less exciting. The science-fiction talk frequently disguises the practical reality, which is that any app developer can start to use cognitive capability for very small tasks – no matter their understanding of the technology or the depth of their pockets. It's this more down-to-earth view of the subject that I'd like to address in this post, hopefully dispelling some of the confusion and mystique that sometimes surrounds the topic.

In my university days we didn't talk about Cognitive Computing, we talked about Artificial Intelligence. The two phrases describe broadly similar concepts, but cognitive is perhaps better because it does less to conjure up images of sci-fi "thinking" machines. Instead, the subject implies a set of capabilities that are slightly less fanciful – things like identifying the subject of a passage of text, or picking out the names of people in a paragraph. It also includes more ambitious capabilities, like the ability to converse in full natural language. But the term is very broad and does not always imply things that you would necessarily think of as "intelligence".

At their most basic, cognitive computing techniques allow the analysis of data types other than the traditional structured records in a database. These might be sentences of natural language, voice recordings or images – all things that until very recently we would have considered a ‘blob’ of data, but a ‘blob’ with which we could do very little. Cognitive computing gives us the ability to peer inside the blob and to start doing interesting things with it – to parse sentences, recognise the subject of images, translate speech, etc.

It can sometimes be hard, even for humans, to understand the true meaning of ambiguous natural language, or to be absolutely certain that two photographs are of the same person. We often hear people express this in terms like “I think that’s probably Jane”.

When a computer tries to match faces, it turns out that we can be a little more specific about its confidence. By using the level of evidence found to support a suggested answer, we can calculate a probability score. This approach is very useful with many cognitive functions – images, natural language, speech, etc. We can then use this calculated probability to make decisions – for example, when IBM's Watson played the Jeopardy! quiz show, it had a threshold that ensured it didn’t answer questions when its confidence was too low – because if you answer and get the question wrong, you lose money.
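
To make that concrete, here is a minimal sketch of the idea in Python. The candidate answers, their scores, and the threshold value are all invented for illustration – the point is simply that a confidence score lets a program decline to answer rather than risk being wrong.

```python
# Hypothetical candidate answers with confidence scores, of the kind a
# cognitive service might return (names and values are illustrative only).
candidates = [
    ("Jane", 0.62),
    ("Janet", 0.21),
    ("Jan", 0.09),
]

# Below this confidence we prefer to stay silent – just as Watson did on
# Jeopardy! when a wrong answer would have cost it money.
ANSWER_THRESHOLD = 0.5

best_answer, confidence = max(candidates, key=lambda c: c[1])

if confidence >= ANSWER_THRESHOLD:
    print(f"I think that's probably {best_answer} ({confidence:.0%} confident)")
else:
    print("Not confident enough to answer.")
```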

Because the level of confidence in a cognitive function is really important, techniques like “machine learning” are often deployed to increase confidence over time. A machine can "learn" by taking feedback from users of the system on its accuracy. If users can tell a system when it is right or wrong, it gives that system the ability to use this feedback to adjust its confidence levels and approaches to problem solving. Or, a machine might learn in an "unsupervised" way by discovering patterns in data. Sometimes machine learning is also used to build a knowledge base by discovering data – for example, allowing a computer program to traverse links in Wikipedia and build a database of celebrities. In these ways, we can build computer systems that are more accurate.
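
As a purely illustrative sketch of that supervised-feedback idea, the snippet below keeps a running right/wrong tally for a category of answer and uses it to scale future confidence scores. Real machine-learning systems are far more sophisticated; the function names and the smoothing scheme here are my own assumptions.

```python
from collections import defaultdict

# Running tally of user feedback per answer category (hypothetical scheme).
feedback = defaultdict(lambda: {"right": 0, "wrong": 0})

def record_feedback(category, was_correct):
    """Store a user's right/wrong verdict for a category of answer."""
    feedback[category]["right" if was_correct else "wrong"] += 1

def adjusted_confidence(category, raw_score):
    """Scale a raw confidence score by the observed accuracy so far.

    Laplace smoothing (+1 / +2) keeps the estimate sensible when we have
    little or no feedback yet.
    """
    stats = feedback[category]
    accuracy = (stats["right"] + 1) / (stats["right"] + stats["wrong"] + 2)
    return raw_score * accuracy

record_feedback("face-match", was_correct=True)
record_feedback("face-match", was_correct=False)
print(adjusted_confidence("face-match", raw_score=0.8))  # 0.8 * 0.5 = 0.4
```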

Machine Learning hasn’t typically been needed in the past, because traditional computer applications that work on numeric and structured data are dealing with certainties – it’s not that we think 1+1 is 2, we know it is. In this context there is no need to learn and become more accurate because the system is already precise in its judgement. Some things we have done in the past – like writing a program to parse web links – have just inherited a more sophisticated label. And some things you might not immediately realise are using machine learning – for example, the way that Google guesses your words when you type in search keywords.

Sometimes cognitive function brings about radical new types of apps – for example, Watson Oncology Advisor, which is helping to treat cancer and save lives. Some of the systems being built are very ambitious and aim to democratise knowledge by consuming large quantities of written documents that can be queried using natural language.

But cognitive function can also be used in a more bite-sized way – the capabilities being seamlessly blended into very useful, but far less ambitious, apps. For example, the popular mobile app Pocket uses cognitive services to accurately categorise and discover interesting content from the web, saving it to the user’s mobile device for later reading. The use of cognitive capabilities is both subtle and invisible to the app’s user. Nobody would think that Pocket is "thinking" or that it is revolutionary – but it is useful. It provides a streamlined experience, with articles being automatically tagged without the user needing to suggest or type those tags themselves. We don’t always need to change the world in order to exploit cognitive capabilities.

Apps like Pocket are made possible because we can deploy cognitive capabilities into a cloud and hide the underlying complexity behind a very easy-to-use developer API. App builders get to concentrate on how to use cognitive function rather than on the engineering required to build it. In effect, we get to democratise access to the underlying cognitive service. This simplification for app builders is a good thing, like all simplification, because it sets our minds free from the chains of complexity to dream of new possibilities.

I notice this shift towards how to use cognitive function, rather than how to build it, all the time in my conversations around the topic – the discussions are almost exclusively in the “how can we use this?” camp, rather than the “how does this work?” one.

Hiding cognitive function behind a cloud API is particularly important, because some of the computing systems needed are highly complex. Sophisticated software architectures abound, as do unusual physical infrastructures that exploit graphics processors (GPUs) for their ability to perform high-speed parallel calculations. This is a specialised area of technology, and one that app builders, thankfully, do not need to worry about – the API hides it from them.

The provision of cognitive services as APIs is also important because it often brings a “pay per use” charging model. Instead of large up-front investments in complex infrastructures, developers can start small and pay on a usage basis. Often the providers of these APIs offer a free tier sufficient to support the development process of a new app. “Starting small” might even mean “zero cost” in the initial stages. This low entry cost is perfect for fostering innovation and small experiments. Because the up-front costs are so low, ideas that might otherwise be strangled by red tape can be allowed to grow. And in cognitive computing this is important, because the things we are doing are often novel. The ideas need a little space to prove their worth before the full force of ROI and business cases is brought to bear on them.

“The level of computational resources required for us to get to the required scale of natural language processing functionality would be cost-prohibitive.” – Jonathon Morgan of CrisisNET, on why they chose to use cognitive APIs rather than build their own natural language processing.

So the “API-ification” of cognitive computing both hides the underlying complexity of the systems and removes the need for large up-front investments – empowering ordinary developers to start using cognitive capabilities in their apps. Rather than being complex and mysterious, the process is very simple: it takes just a few lines of code in any programming language to access a cognitive API, using industry-standard REST concepts and technologies.
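
For example, a call to a cognitive API from Python, using the popular requests library, looks roughly like the sketch below. The endpoint URL, credentials and payload are placeholders rather than any real provider’s API – each provider documents its own, but the shape of the call is much the same.

```python
import requests

# Placeholder endpoint and credentials – substitute the values given in your
# chosen provider's documentation. This is a sketch, not a real service URL.
ENDPOINT = "https://api.example.com/v1/analyze"
USERNAME = "your-service-username"
PASSWORD = "your-service-password"

# Send a piece of natural language to the (hypothetical) cognitive service.
response = requests.post(
    ENDPOINT,
    auth=(USERNAME, PASSWORD),  # many services use HTTP Basic authentication
    json={"text": "IBM's Watson played the Jeopardy! quiz show."},
)
response.raise_for_status()

# The structure of the response varies by service; printing the raw JSON
# shows whatever analysis came back.
print(response.json())
```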

IBM’s Watson Group is pioneering the development of cognitive APIs at the Watson Developer Cloud. There are many services now available, some mature and some still being perfected in an open Beta programme. These APIs can be used to build new classes of cognitive apps, but they can also be used in much more subtle ways to augment apps and make them just a little more natural and easy to use. Either way, you don’t need to know anything about cognitive computing, or have deep pockets, to use the APIs. Don’t let industry jargon and mystique stop you from exploring the potential.

