Original: Computerworld
Rohan Pearce (Computerworld)
30 July, 2013 14:30
IBM's Watson made a memorable TV debut in 2011, and some of the concepts behind it may have a far deeper impact on the world
"I expected Watson's bag of cognitive tricks to be fairly shallow, but I felt an uneasy sense of familiarity as its programmers briefed us before the big match: The computer's techniques for unraveling Jeopardy! clues sounded just like mine," game-show contestant Ken Jennings wrote after his game show loss to IBM's Watson supercomputer.
"...Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad [Rutter] and I were the first knowledge-industry workers put out of work by the new generation of 'thinking' machines."
"'Quiz show contestant' may be the first job made redundant by Watson, but I'm sure it won't be the last," Jennings concluded.
Watson's 2011 victory was a publicity triumph for its creators at IBM. But according to the company, it also symbolised the birth of a new era of intelligent systems – or 'cognitive computing'. Cognitive computing, according to IBM, involves systems that interact naturally with humans, learn from their experiences, and generate and evaluate evidence-based hypotheses.
Computerworld Australia caught up with Glenn Wightwick, Director of IBM Research - Australia, for a brief chat about how the technologies behind Watson have already begun finding a home in businesses, as well as what distinguishes, in IBM's view, cognitive computing from past approaches to machine learning and natural language processing.
How would you distinguish the concept of 'cognitive computing' from the more general idea of artificial intelligence? Is it just a subset (or superset) of AI-related concepts?
AI is a very broad field and one where there isn't a universally accepted definition, since people continue to discuss and debate exactly what intelligence is! Certainly there is a high degree of overlap between cognitive computing and AI in areas such as machine learning algorithms, knowledge representation, natural language processing and so on.
IBM recently received the 2013 Feigenbaum Prize (awarded to outstanding AI research which uses computer science methods) for our work on the Watson system, and we have seen a strong resurgence in the field of AI since Watson beat Ken Jennings and Brad Rutter in Jeopardy! in February 2011.
Why are we seeing the emergence of it now? Is it just the product of accumulated software and hardware advances or a product of emerging needs, such as analysing unstructured and semi-structured data?
Many of the building blocks of cognitive computing have been around for some time. Certainly the types of problems we are starting to explore with cognitive computing rely on being able to perform an enormous amount of processing over very large volumes of data in a very short period of time, so that a cognitive system can engage with a human in a natural way, whether that is to win at Jeopardy! or to help an oncologist develop a cancer treatment plan. So the underlying technology is really important.
But the breakthrough here is the approach we have taken and the class of problems that we are tackling. Work has been done in many of these application areas for years, of course, using more traditional rules-based systems or, more recently, machine learning.
What we have discovered is that our approach to building cognitive systems, based on our Watson technology, is yielding wonderful results in many areas. This isn't just about analysing vast quantities of structured and unstructured data.
Is it a real transformation in computing or just a novel implementation of language analysis and data processing?
What is dramatically different about how we are approaching cognitive computing based on the Watson technology is that we approached natural language as a stochastic problem.
We haven't built this using classic deterministic algorithms. We do not base this on the standard rules of grammar, and we do not rely on hand-crafted (and often brittle) ontologies.
Everything Watson knows about the language, it knows probabilistically. We believe the technique can be applied to other aspects of human cognition (perception, foresight, investigation, etc.), with special emphasis on the space of unstructured data (i.e. written text), which also tends to be the nesting ground of human cognition.
Watson learns through a combination of training via machine learning, adapting for features of the language that are new to a particular domain, and ingesting all the information it can find on the domain.
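To make the probabilistic approach concrete, here is a minimal, hypothetical Python sketch, not IBM's implementation, of the general idea: generate candidate answers to a clue, attach evidence scores to each, and combine those scores with learned weights into a single confidence. The candidates, feature names and weights are illustrative assumptions only.

```python
import math

def score_hypothesis(features, weights, bias=0.0):
    """Combine per-evidence feature scores into a confidence in [0, 1] using a
    simple logistic model. In a real system the weights would be learned from
    question/answer training pairs, i.e. the machine-learning training step."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative only: hypothetical candidate answers to a clue, each carrying
# evidence features such as passage-retrieval match and answer-type agreement.
candidates = {
    "Toronto": {"passage_match": 0.41, "type_match": 0.10, "popularity": 0.70},
    "Chicago": {"passage_match": 0.88, "type_match": 0.95, "popularity": 0.60},
}

# Hypothetical learned weights for each evidence source.
weights = {"passage_match": 2.5, "type_match": 3.0, "popularity": 0.5}

ranked = sorted(
    ((score_hypothesis(f, weights, bias=-3.0), answer)
     for answer, f in candidates.items()),
    reverse=True,
)
for confidence, answer in ranked:
    print(f"{answer}: confidence {confidence:.2f}")
```

The point of the sketch is that no single rule decides the answer; many weak, noisy evidence scores are weighed together, and the system can report how confident it is rather than simply asserting a result.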
I know IBM has started offering a Watson-based customer service system and done some work with healthcare providers. Which industries do you think these kinds of technologies could have the most impact on?
Cognitive computing has applications for almost every industry where humans engage in dialogue, ask questions, test ideas and make decisions. These include healthcare, finance, education, law, government services and commerce.
We are seeing an enormous interest from organisations and have been working in healthcare and finance, as well as creating capabilities to support call centres and so on.
How about at the level of the consumer – what's the potential impact there?
A lot of our interaction with computers is still somewhat transactional. Think of the way you buy products via the Internet. Certainly, the explosion of mobile devices such as smartphones and tablets has created user interfaces that are far more powerful and intuitive. In the future, we will see cognitive computing deliver far more natural experiences where you will engage in a dialogue.
In the e-commerce example, if I want to book an airline flight, I still end up doing a lot of searching for schedules and fares using my favourite travel tool or airline website.
In the future, I could actually have a conversation with a cognitive system and start out saying "I want to travel with my family to Bali in the next school holidays and I'm looking for the best value fares but I really hate red-eye flights" and the system would understand what school holidays are.
It’d come back with something like, "There are some great fares on Jetstar but you would need to take your children out of school a couple of days earlier. Do you want to do this?"
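As a purely hypothetical illustration of what a system would have to do behind the scenes to handle a request like that, the following toy Python sketch maps the natural-language request onto structured search constraints. The holiday dates, keyword matching and field names are all invented for illustration; a real cognitive system would learn this understanding from data and context rather than hard-code it.

```python
import re

# Hypothetical background knowledge the system would need: the actual dates of
# "school holidays", which the traveller never states explicitly.
SCHOOL_HOLIDAYS = {"next school holidays": ("2013-09-21", "2013-10-07")}

def parse_request(utterance):
    """Map a natural-language travel request onto structured constraints.
    Purely illustrative keyword matching, not genuine language understanding."""
    constraints = {}
    if "school holidays" in utterance:
        constraints["dates"] = SCHOOL_HOLIDAYS["next school holidays"]
    # Crude pattern: assume the destination is a capitalised word after "to".
    m = re.search(r"to ([A-Z]\w+)", utterance)
    if m:
        constraints["destination"] = m.group(1)
    if "red-eye" in utterance:
        constraints["exclude_overnight"] = True
    if "best value" in utterance:
        constraints["sort_by"] = "price"
    return constraints

print(parse_request(
    "I want to travel with my family to Bali in the next school holidays "
    "and I'm looking for the best value fares but I really hate red-eye flights"
))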
The use of Watson-derived technologies so far seems to have been quite domain-specific. But looking forward, can you imagine more generalised forms of cognitive computing?
The approach we adopted in developing Watson to play the Jeopardy! game was really quite clever. By tackling this problem, we were forced to deal with a completely open domain. There was no way you could build a rule-based system to deal with the incredible variety and complexity of the Jeopardy! questions. Hence, our approach to cognitive computing has been domain-independent from the start.
As we have applied Watson to new fields (like healthcare, for example), we are doing two things. Firstly, we have to train the system in that field by building a corpus of knowledge specific to that field. So in the case of healthcare, we would need to source medical text books, references, journals, medical databases etc.
The second thing we need to do is to integrate Watson into the workflow of the particular domain. So again, using the healthcare example, we need to understand how an oncologist works and make sure Watson has access to the necessary medical records, patient care systems, and other data elements.
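As a rough illustration of that first step, building and then querying a domain corpus, here is a hedged Python sketch using a toy term-overlap index. The document names and snippets are invented, and a real deployment would ingest full textbooks, journals and guidelines and run far richer natural language processing over them.

```python
from collections import Counter

# Hypothetical stand-in for a domain corpus: in practice this would be the
# medical textbooks, references, journals and databases mentioned above.
corpus = {
    "guideline-001": "first line treatment options for stage II breast cancer",
    "journal-042":   "adjuvant chemotherapy outcomes in elderly patients",
    "textbook-ch7":  "principles of radiation therapy dose planning",
}

def tokenize(text):
    """Very crude tokenizer; real ingestion would use full NLP pipelines."""
    return [w.strip(".,?") for w in text.lower().split()]

# Build a simple term index over the ingested documents.
index = {doc_id: Counter(tokenize(text)) for doc_id, text in corpus.items()}

def retrieve(question, top_n=2):
    """Rank documents by term overlap with the question -- a toy proxy for the
    evidence retrieval a cognitive system would run over its corpus."""
    q_terms = set(tokenize(question))
    scores = {doc_id: sum(counts[t] for t in q_terms)
              for doc_id, counts in index.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(retrieve("What adjuvant chemotherapy is recommended for elderly patients?"))
```

The second step, workflow integration, is then about wiring retrieval like this into the clinician's existing systems so the evidence arrives where decisions are actually made.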
What's the next frontier for cognitive computing?
Our current work in cognitive computing is yielding capabilities such as:
- recall,
- learning,
- judgement,
- reasoning and
- inference.
Watson still seems to rely on quantitative advances in hardware and software development. Is this fair to say? Can you envisage the development of forms of cognitive computing that represent a rupture from current computer and software architectures?
Watson is built on state-of-the-art hardware and software technology, but the underlying architecture is still von Neumann, which is the basis of almost all computers in the world today.
We are already working on novel technologies such as SyNAPSE, which combines digital “neurons” and on-chip “synapses” in working silicon. We’re also working on memory technologies that are far more dense, including phase-change memory, atomic-scale memory and racetrack memory.
Glenn Wightwick is speaking at the 31 July-1 August Wired for Wonder conference in Sydney
"...Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad [Rutter] and I were the first knowledge-industry workers put out of work by the new generation of 'thinking' machines."
"'Quiz show contestant' may be the first job made redundant by Watson, but I'm sure it won't be the last," Jennings concluded.
[ Get served with the latest developments on data centres and servers in Computerworld's Storage newsletter ]
Watson's 2011 victory was a publicity triumph for its creators at IBM. But according to the company, it also symbolised the birth of a new era of intelligent systems – or 'cognitive computing'. Cognitive computing, according to IBM, involves systems that interact naturally with human, learn from their experiences and generate and evaluate evidence-based hypotheses, says
Computerworld Australia caught up with Glenn Wightwick, Director of IBM Research - Australia, for a brief chat about how the technologies behind Watson have already began finding a home in businesses, as well at what distinguishes, in IBM's view, cognitive computing from past approaches to machine learning and natural language processing.
How would you distinguish the concept of 'cognitive computing' from the more general idea of artificial intelligence? Is it just a subset (or superset) of AI-related concepts?
AI is a very broad field that and one where there isn't a universally accepted definition, since people continue to discuss and debate exactly what intelligence is! Certainly there is a high degree of overlap between cognitive computing and AI in areas such as machine learning algorithms, knowledge representation, natural language processing and so on.
IBM recently received the 2013 Feigenbaum Prize (awarded to outstanding AI research which uses computer science methods) for our work on the Watson system, and we have seen a strong resurgence in the field of AI since Watson beat Ken Jennings and Brad Rutter in Jeopardy! in February, 2011.
Why are we seeing the emergence of it now? Is it just the product of accumulated software and hardware advances or a product of emerging needs, such as analysing unstructured and semi-structured data?
Many of the building blocks of cognitive computing have been around for some time. Certainly the types of problems we are starting to explore with cognitive computing rely on being able to perform an enormous amount of processing over very large volumes of data in a very short period of time, so that a cognitive system can engage with a human in a natural way, whether that is to win at Jeopardy! or to help an oncologist develop a cancer treatment plan. So the underlying technology is really important.
But the breakthrough here is the approach we have taken and the class of problems that we are tackling. Work has been done in many of these applications areas for years of course, using more traditional rules-based systems or more recently machine learning.
What we have discovered is that our approach to building cognitive systems, based on our Watson technology is yielding wonderful results in many areas. This isn't just about analysing vast quantities of structured and unstructured data.
Is it a real transformation in computing or just an novel implementation of language analysis and data processing?
What is dramatically different about how we are approaching cognitive computing based on the Watson technology is we approached natural language as a stochastic problem.
We haven't built this using classic deterministic algorithms. We do not bias on the standard rules of grammar, and we do not rely on hand-crafted (and otherwise brittle) ontologies.
Everything Watson knows about the language it knows probabilistically. We believe the technique can be applied to other aspects of human cognition (perception, foresight, investigation, etc.), but with special emphasis on the space of unstructured data (i.e. written text) that also tends to be the nesting ground of human cognition as well.
Watson learns through a combination of training via machine learning, adapting for features of the language that are new to a particular domain, and ingesting all the information it can find on the domain.
I know IBM has started offering a Watson-based customer service system and done some work with healthcare providers. Which industries do you think these kinds of technologies could have the most impact on?
Cognitive computing has applications for almost every industry where humans engage in dialogue, ask questions, test ideas and make decisions. These include healthcare, finance, education, law, government services and commerce.
We are seeing an enormous interest from organisations and have been working in healthcare and finance, as well as creating capabilities to support call centres and so on.
How about at the level of the consumer – what's the potential impact there?
A lot of our interaction with computers is still somewhat transactional. Think of the way you buy products via the Internet. Certainly, the explosion of mobile devices such as smartphones and tablets have created user interfaces that are far more powerful and intuitive. In the future, we will see cognitive computing deliver far more natural experiences where you will engage in a dialog.
In the e-commerce example, if I want to book an airline flight, I still end up doing a lot of searching for schedules and fares using my favourite travel tool or airline website.
In the future, I could actually have a conversation with a cognitive system and start out saying "I want to travel with my family to Bali in the next school holidays and I'm looking for the best value fares but I really hate red-eye flights" and the system would understand what school holidays are.
It’d come back with something like, "There are some great fares on Jetstar but you would need to take your children out of school a couple of days earlier. Do you want to do this?"
The use of Watson-derived technologies so far seems to have been quite domain-specific. But looking forward, can you imagine more generalised forms of cognitive computing?
The approach we adopted in developing Watson to play the Jeopardy! game was really quite clever. By tackling this problem, we were forced to deal with a completely open domain. There was no way you could build a rule-based system to deal with the incredible variety and complexity of the Jeopardy! questions. Hence, our approach to cognitive computing has been domain-independent from the start.
As we have applied Watson to new fields (like healthcare, for example), we are doing two things. Firstly, we have to train the system in that field by building a corpus of knowledge specific to that field. So in the case of healthcare, we would need to source medical text books, references, journals, medical databases etc.
The second thing we need to do is to integrate Watson into the workflow of the particular domain. So again, using the healthcare example, we need to understand how an oncologist works and make sure Watson has access to the necessary medical records, patient care systems, and other data elements.
What's the next frontier for cognitive computing?
Our current work in cognitive computing is yielding capabilities such as
- recall,
- learning,
- judgement,
- reasoning and
- inference.
Watson seems to still relies on quantitative advances in hardware and software development. Is this fair to say? Can you envisage the development of forms of cognitive computing that represent a rupture from current computer and software architectures?
Watson is built on state-of-the-art hardware and software technology but the underlying architecture is still von Neuman, which is the basis of almost all computers in the world today.
We are already working on new and novel technologies such as SyNAPSE which combines digital “neurons” and on-chip “synapses” in working silicon. We’re also working on memory technologies that are far more dense, including phase-change memory, atomic-scale memory and race-track memory.
Glenn Wightwick is speaking at the 31 July-1 August Wired for Wonder conference in Sydney
No hay comentarios:
Publicar un comentario
Nota: solo los miembros de este blog pueden publicar comentarios.