The Rise of the Machines: The Disruptive Potential of Cognitive Computing

As cinematic representations of intelligent machines over the past decades have shown, fascination with and fear of artificial intelligence inevitably mix. People enjoy the thrill of watching humans knocked off balance by AI in Ex Machina, robots trying to take over the world in I, Robot, or an entire team of Marvel superheroes fighting Ultron in the latest Avengers movie. In real life, too, intelligent machines are rivalling humans, and many fear that automation and digitalization will steal people's jobs. Still, the quest for intelligent machines is relentless.


Thinking Robot — Image by © Blutgruppe/Corbis

Back in 1997, the Deep Blue computer picked grandmaster Garry Kasparov apart in a chess match. Three years ago, the supercomputer Watson competed on Jeopardy! against two champions and soundly defeated them. Now Watson helps doctors make more accurate diagnoses using raw data from medical research and patient histories. In Japan, cuddly robot bears are hailed as the future of elderly care. Talking to one's phone or tablet is no longer relegated to the imaginary space of films such as the science-fiction drama Her but is a reality. These and many more innovations in the field of artificial intelligence have profound implications for the relationship between man and machine.


Indeed, in our increasingly digitalized world with exponentially growing data volumes, complex issues are handled much more effectively by computers than by humans. Computers can process large volumes of data at a speed unattainable for humans. Data is increasing not only in volume but also in velocity, variety and uncertainty. Most data is now supplied in unstructured forms such as images, videos, symbols and natural language; computer systems therefore had to step up to the challenge of processing this new kind of data. Cognitive computing aims to simulate human thought processes in a computerized model: self-learning systems that use data mining, pattern recognition and natural language processing are trained to mimic the way the human brain works. Ultimately, cognitive computing strives to solve complex problems independently, without human assistance. According to Gartner, the era of cognitive computing, also called the smart machine era, will be the most disruptive in the history of IT.
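To make the idea of learning from unstructured text concrete, here is a deliberately minimal, hypothetical sketch: a toy classifier that builds word-frequency profiles from labeled example texts and assigns new text to the best-matching profile. All names and data are invented for illustration; real cognitive systems use far richer models than this.

```python
from collections import Counter

def tokenize(text):
    # Crude tokenizer: lowercase, alphabetic words only
    return [w for w in text.lower().split() if w.isalpha()]

def train(examples):
    # Build one word-frequency profile per label from labeled texts
    profiles = {}
    for text, label in examples:
        profiles.setdefault(label, Counter()).update(tokenize(text))
    return profiles

def classify(profiles, text):
    # Score each label by how often its profile has seen the text's words
    words = tokenize(text)
    return max(profiles, key=lambda lbl: sum(profiles[lbl][w] for w in words))

examples = [
    ("patient history shows elevated blood pressure", "medical"),
    ("diagnosis confirmed by lab results and symptoms", "medical"),
    ("quarterly revenue and profit margins improved", "finance"),
    ("stock markets rallied on interest rate news", "finance"),
]
model = train(examples)
print(classify(model, "the patient symptoms suggest a diagnosis"))  # medical
```

The "self-learning" aspect is visible even here: the system is never told rules, only examples, and its behavior improves as more labeled data flows in.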


While AI capabilities such as natural language processing, speech recognition and machine learning algorithms were invented 30 years ago, it is only now that these technologies are finding significant application in business systems. More than 2,300 startups have been founded, and venture capitalists have lately invested billions of dollars in AI (a representation of the AI business landscape can be found here). Furthermore, major players like Amazon, Google, IBM, Microsoft, SAS and Yahoo are investing in the development of smarter applications.


Why now? The exponential growth of unstructured data not only posed a challenge to computer systems but also offered an effective means of training machines. Big data, along with improvements in the above-mentioned disciplines, is what is making the difference in machine learning. Sophisticated algorithms can only learn to solve problems independently through repeated training on big data. The success of smart applications thus depends largely on the quality of the data they are fed.
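Why data volume matters can be sketched with a toy example (purely illustrative, not a real cognitive system): fitting a noisy linear relationship by least squares. Estimates computed from more observations land reliably closer to the true underlying value, which is the statistical core of "repeated training on big data".

```python
import random

random.seed(42)

def noisy_samples(n, slope=2.0, noise=1.0):
    # Simulated observations y = slope * x + Gaussian noise
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [slope * x + random.gauss(0, noise) for x in xs]
    return xs, ys

def fit_slope(xs, ys):
    # Least-squares slope of a line through the origin
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Error of the learned slope typically shrinks as the data set grows
for n in (10, 100, 10000):
    xs, ys = noisy_samples(n)
    print(n, round(abs(fit_slope(xs, ys) - 2.0), 4))
```

The same principle cuts the other way: if the observations themselves are poor (mislabeled, biased, noisy), no amount of volume rescues the model, which is why the quality of the training data is decisive.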


In healthcare, the finance industry, e-commerce, customer relationship management and search engines, cognitive computing is employed to support human experts in making faster and more accurate decisions. While machines have replaced human work in many fields, especially where manual work is concerned, artificial intelligence does not supersede human experts but rather acts as a catalyst: cognitive computing systems can amplify what either machines or humans could do on their own. JANZZ also supplies such a system in the field of employment, skills and talent. The ontology JANZZon! and the smart matching engine JANZZsme! make complex problems such as job and skills matching computable and completely change the way we think about and go about job searching. Because the applications are structured semantically (that is, occupations, skills, qualifications etc. are interlinked logically), they can deliver meaningful results for complex searches for job vacancies, employees, freelancers etc. in real time, across multiple languages. Importantly, the applications are constantly fed with new data and therefore become more accurate over time. With these tools, you don't search for a job – you are found.
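How semantic matching differs from plain keyword search can be illustrated with a tiny, hypothetical ontology. The occupations, skills and function names below are invented for illustration and say nothing about how JANZZon! or JANZZsme! actually work: the point is only that logically interlinked concepts allow ranking by overlap rather than exact string equality.

```python
# Hypothetical mini-ontology: occupations linked to the skills they require
ONTOLOGY = {
    "data scientist": {"statistics", "python", "machine learning"},
    "nurse": {"patient care", "triage", "record keeping"},
    "web developer": {"javascript", "html", "css"},
}

def match_score(candidate_skills, required_skills):
    # Fraction of the job's required skills that the candidate covers
    return len(candidate_skills & required_skills) / len(required_skills)

def rank_jobs(candidate_skills):
    # Rank occupations by semantic overlap instead of keyword equality
    scores = {job: match_score(candidate_skills, req)
              for job, req in ONTOLOGY.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

candidate = {"python", "statistics", "sql"}
print(rank_jobs(candidate)[0][0])  # data scientist
```

A candidate is surfaced for the best-fitting vacancy even though no job title was ever typed into a search box, which is the "you are found" idea in miniature.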


The high quality of JANZZ's tools stems from its specialization and expertise in occupation data. The ontology JANZZon! has been built with solid industry-specific expertise and years of experience in HR. Every day, a dedicated team of IT supporters and engineers works on improving the quality and extent of JANZZon!. A myriad of connections between occupations, skills and other data stored in the knowledge base is established continuously – like synapses in a human brain – turning unstructured occupation data into structured data. Big data is turned into smart data. The gist: cognitive computing tools are only ever as good as the expertise of their human creators. The success of the supercomputers Deep Blue and Watson can also be explained by the specificity and quality of their training: each was built for one particular purpose, to play chess and to compete in Jeopardy! respectively. Even at a later stage, Watson needed to be fed a wealth of medical research and patient histories before it could supply doctors with accurate treatment suggestions. The assumption that smart applications are superhuman all-rounders is thus plainly inaccurate.


The only ones who need fear the rise of cognitive systems are those who perform menial tasks. True, cognitive systems can process volumes of information in real time that we could not even dream of, but they need to be nurtured by human experts in order to perform accurately. Hence, HR professionals and doctors need not fear their digital supporters but should rather welcome their disruptive and amplifying potential.