Author: Bilal Kathrada

#Throwback – IT varsity scoops Top Prize at the 2015 IBM Youth Innovation Challenge

Director of IT varsity Maseehullah Kathrada was awarded the top prize by IBM’s judges at the 2015 IBM Youth Innovation Challenge, held at the University of KwaZulu-Natal in Durban from 25 to 29 May 2015. The Youth Innovation Challenge, an initiative of Innovate Durban in partnership with IBM, the eThekwini Municipality, the Sustainable Enterprise Development Facility (SEDF) and the Government of Flanders, focused on supporting youth-driven technology businesses.

The theme of the Innovate Durban hackathon was ‘Smarter eThekwini’, in support of the ‘Open Government’ and ‘Open Data’ agenda and, in this way, of greater engagement by the eThekwini Municipality with its citizens. The event was booked to capacity and attracted developers and students from around the country. Nine teams participated in the Challenge, and each team was tasked with developing a technology-based solution to one major challenge facing the city and its citizens.

Maseehullah was instrumental in developing WorkerBee, a social app that linked skilled artisans to potential clients. WorkerBee is like LinkedIn, but for artisans. Maseehullah, who ventured into the technological world at a very early age, is also the co-founder of Compukids and has an impressive portfolio, including that of web developer, graphic designer, online tutor, content creator and tech entrepreneur. Reflecting on the moment, Maseehullah said he had always been inspired by this quotation from Steve Jobs: “Being the richest man in the cemetery doesn’t matter to me. Going to bed at night saying we’ve done something wonderful… that’s what matters to me.”

Part 2 of IBM predictions: AI can end world hunger

We are living in some of the most exciting times in history. Not only are we seeing science and technology advancing in unprecedented ways, but it is possible that, in the near future, we are also going to see many of the problems that have plagued us for centuries, such as food-borne diseases, world hunger and pollution, completely disappear off the face of the Earth.

At this year’s IBM Think conference, researchers from the tech giant’s research facilities around the world described five technologies they are working on which they predict are going to radically transform our world in the next five years, bringing us a step closer to what we might once have considered a science-fiction future. Among the technologies they described are an artificial intelligence (AI) system that will help small farmers around the world optimise farmland usage, thereby increasing production; Internet of Things devices that will prevent food waste; big data systems that will protect us from bad bacteria; sensors that will enable us to detect bacteria in food using our cellphones; and, finally, new plastic recycling technology that will save the oceans. In the previous article in the series I discussed the first two technologies. In this article we will take a closer look at the next two, both of which relate to food-borne diseases.

Big data will protect us from bad bacteria. Every year, around 600 million people, many of them infants, fall ill after consuming spoiled or contaminated food. Unfortunately, we have no way to prevent this. Lab testing of food is time-consuming, expensive and error-prone, and we typically only do it after a food poisoning incident has occurred. There is no way to test food on the spot. What we need is a food testing system that is far more accurate and effective than current methods, and so accessible that it can be used by anyone, anywhere. Incredibly, this is going to become a reality within the next five years, thanks to two independent teams of researchers at IBM.

The first team, headed by Geraud Dubois, has figured out a way to “spy” on the bacteria living in food, gathering data about them. Using this data, they are able to make extremely accurate predictions about the state of the food. Like our bodies, the food we eat is full of bacteria, some good for us and some dangerous. Different food types contain different types of bacteria, while a single food item might host different types of bacteria at different stages in its life cycle. For example, chicken has a set of microbes living on it, while pork has a completely different set. Under normal circumstances, we should not find pork microbes living on chicken, or vice versa. If we do, that is a strong indication that cross-contamination has occurred. Similarly, fresh bread might be populated by certain bacteria, while stale bread might host completely different species.

These bacteria number in the millions, and there may be tens of thousands of different species on a single food item. The challenge is to identify the different species, tell the good from the bad and, very importantly, understand what the presence of certain bacteria might mean in a given food type. Once we overcome this challenge, it will be possible to determine whether a certain food is good to eat or not. Geraud and his team have been sequencing the DNA and RNA of microbes on food and, using big data analysis, were able to create a massive database of the microbes found in food produced around the world over the last 10 years.
AI will enable home bacteria detection. While it is great to have a comprehensive database of microbes, that information will be useless to us unless there is some way to actually detect the types of microbes living on our food. The second team of researchers at IBM is solving this problem by developing sensors that can detect bacteria on food items using nothing more than a cellphone. The sensors, which will be capable of detecting bacteria as small as 1 micron (75 times smaller than the width of a human hair), will scan the food item for bacteria, cross-reference the findings against the microbe database developed by Geraud’s team and provide information within seconds. Within a few years, the sensors will be small and cheap enough to be present everywhere: in our fridges, on kitchen tops and even on cutting boards, as well as on supermarket shelves and fridges. All we will have to do is hold a sensor up to a food item, and within seconds it will tell us whether the food is fresh or not.

Between these two technologies, the world’s food industry will be completely transformed. Not only that, but we will also be able to live much healthier lives because, for the first time in history, we will have the ability to accurately detect whether a food item is good for us or not.

Smart cities key to Earth’s future as urbanisation balloons

In 2006 the world experienced a unique phenomenon: for the first time in human history, the number of people living in urban areas equalled the number living in rural areas. This was due to what is often labelled the “biggest mass migration in history”, where the world’s population is moving en masse away from rural areas into urban areas. Since then, this mass migration has accelerated, and as of the end of last year, there were about 4.2 billion people living in urban areas – 55% of the Earth’s population. By 2050, 67% of the world’s population, a full 6.5 billion people, will be living in cities. In other words, the population of the world’s cities will increase by 2.3 billion over the next three decades. That is a huge increase.

Naturally, this increase in population will place a tremendous amount of pressure on the infrastructure of the world’s cities, with many already dangerously close to complete infrastructural collapse. The added pressure affects nearly every aspect of a city’s operational infrastructure: public transportation, water supply, power supply, sanitation, solid waste management and others. Governments around the world are painfully aware of this massive challenge, and many are deploying technology such as the Internet of Things, artificial intelligence and big data to help overcome these problems.

Singapore has become the gold standard for smart cities around the world, thanks to the extensive effort the city has made in solving many of its problems using technology. At the heart of Singapore’s smart efforts is big data. The city collects more data than any other about pretty much everything that goes on within its boundaries. Data is collected on everything from crowd density to traffic, pollution, wind flow and even the health of its senior citizens. The data is then analysed by artificial intelligence software to inform important decisions. For example, models developed from data about the flow of wind through the city are used to determine where buildings should be built, and how they should be oriented. Proper airflow through the city effectively reduces the amount of air-conditioning required, thereby reducing the city’s energy consumption.

Dubai is a close contender to Singapore, with a number of smart solutions being implemented, such as the Dubai Now app, which allows citizens to do things like pay utility bills and fines, log faults, report violations and call taxis.

Istanbul is Europe’s largest city, the economic capital of Turkey and a major world tourist destination. The city of 15 million people faces a dual public transportation problem: to effectively and efficiently transport its own citizens as well as the massive annual influx of tourists. Its solution was to implement massive upgrades to its public transportation network, which comprises four primary modes of transport: buses, trams, subway trains and ferries. Public transportation is now cheap, clean and safe, and moves tens of millions of people annually. For example, the ferry boats alone transport 60 million people annually. Additionally, the Metro Istanbul app enables passengers to easily get from place to place. A simple search will tell you exactly which modes of transport you will need to take to get to your destination, and at what time the next transport will arrive.

But the world’s leader when it comes to public transport is China. Shanghai’s high-speed railway system has the capacity to move nearly 75 million people in and out of the city daily.
These people live in areas far beyond the city limits, where land and housing are cheap. A solution of this type would go a long way towards resolving the housing crisis in South African cities. Recently there has been a spate of land-grab attempts in cities around the country, where people desperate for a place to live began to occupy state- and municipality-owned land. Johannesburg is addressing this by converting unused factories into low-cost housing, but this is a short-term solution. Soon, even those will run out.

Barcelona has also entered the smart cities race, thanks to a number of smart solutions it has implemented over the years. One such solution is the underground garbage disposal system, which eliminates the need to physically pick up garbage from bins around the city. Instead, bins are connected to an underground chute system which “sucks in” the garbage and transports it to the nearest dump.

These are just some of the cities that are using smart solutions to solve their ongoing challenges. If these cities prove anything, it is firstly that any city can be a smart city, and secondly that every city has unique problems and will have to find unique solutions. As a developing country with a high unemployment rate, we need to empower our own citizens to find solutions to our problems as far as possible, rather than outsourcing to other countries. This will create a win-win situation where our cities will become smart cities and, in the process, numerous employment opportunities will be created.

Part 1: IBM’s predictions for our technological future

What are the five technologies that are going to fundamentally reshape business and society in the next five years? Some of IBM’s leading researchers shared their thoughts on this at IBM’s annual Think conference, which was held in San Francisco this month. A key part of the annual Think conferences is IBM’s “5 in 5” technology predictions, where the tech giant showcases some of the biggest breakthroughs coming out of its research facilities around the world, presented by the people working at those research centres.

This year’s predictions all relate to challenges presented by the world’s ever-growing population, with the global population expected to cross the 8 billion mark within five years. According to Arvind Krishna, IBM’s senior vice-president for cloud and cognitive software, “to meet the demands of this crowded future, IBM researchers are exploring new technologies and devices, scientific breakthroughs and new ways of thinking about food safety and security”. He sums up these new innovations as going “from seed to harvest to shelf, table and recycling”. The breakthrough technologies include an artificial intelligence system that will help small farmers around the world to optimise farmland usage, thereby increasing production. Internet of Things devices will prevent food waste, while big data systems will protect us from bad bacteria. Sensors will enable us to detect bacteria in food using our cellphones, and new plastic recycling technology will save the oceans.

Kenyan computer scientist Juliet Mutahi, the daughter of a coffee plantation owner, said one of the challenges faced by small co-operative farmers like her father was that they lacked the scientific and technological resources to acquire vital data about their farms that would enable them to make informed decisions about how to use their land optimally. A start-up called Hello Tractor is fixing this by developing a device fitted with a number of sensors that constantly gather important data about the farm. The device is mounted on tractors and, as the farmers go about their normal day-to-day activities, the sensors gather information on the weather, dimensions and elevation of the farm, then upload the data to a blockchain. A separate device is used to gather information about the soil and the water table. The farmer simply takes soil samples and places them onto the device, which is about the size of a business card. The device tests the soil sample and submits the data to the blockchain alongside the data from the tractors. The two are combined to produce a “digital twin” of the farm: a digital representation of the physical farm. Data from thousands of farms around the world can be gathered in this way, and the collective data is then processed by artificial intelligence software to make recommendations on optimal land usage. The system is also able to make accurate predictions of future crop yields based on the region, land size, elevation, soil health and other data.

Optimal farmland usage might raise food production, but how much of this food will end up on tables around the world? Sriram Raghavan, vice-president of IBM Research in India, said almost half of all the fruit and vegetables produced in the world was wasted because of inefficient and chaotic distribution systems. The result was that too much food was delivered to some areas while others were left out. The excess food was not consumed by anyone and so went bad, leading to large-scale wastage and millions of dollars in lost revenue.
With timely and accurate data on hand, the excess could have been diverted to areas where it was needed, possibly even to places where there was a food shortage or hunger. In the next five years, this problem will become a thing of the past. Devices will track the movement of fresh produce along every step in the supply chain from source to table, gathering all kinds of data such as temperature, ripeness and how close the food is to spoiling. The data will be stored in the blockchain and processed by AI programs which will, over time, develop high-level models of the movement of food through the supply chain. These models will then be used to make more accurate and effective recommendations for food produce logistics, minimising over-supply and wastage. Between these two breakthroughs, there is hope that the world’s food supply problems can be solved. Although it is still too soon to tell, there is even a possibility that such technologies could provide the solution to world hunger.

Don’t dismiss President Ramaphosa’s smart city idea

The recent announcement by President Cyril Ramaphosa about developing a “smart city” in South Africa has created quite a stir and has triggered a lot of conversation. Unfortunately, it seems not many people are thrilled about the idea. Much of the negativity stems not from the concept of the smart city itself, but from the state of the country and the challenges that South Africans face. As one person put it: “How smart will the president’s smart city be during load shedding?”

The issue, it seems, is not that South Africans don’t support the president’s idea of a smart city or of technological advancement in general; it’s simply that they’re wary of promises of a futuristic hi-tech utopia when we have bigger problems at home, such as load shedding, youth unemployment, rampant poverty, crime and a whole host of other things. People want to get the basics right before moving forward. It doesn’t make sense to buy a 65-inch ultra high-definition LED television when your home has huge gaping holes in the roof.

But I won’t go to the extent of saying that there’s no place for smart cities in South Africa. On the contrary, I believe that, as in so many cities around the world, technology can provide effective solutions for many of the problems South African cities and their residents face. The part I’m sceptical about is the idea of a new, standalone smart city, one that’s set up separately from existing cities. This, in my opinion, is a disaster waiting to happen, for a number of reasons.

First, I don’t believe there’s such a thing as a “smart” city; rather, there are ordinary cities that have found smart solutions to their problems. In any case, how would you define a smart city? How smart is smart? How long is a piece of string?

Secondly, the age-old saying that “necessity is the mother of invention” applies to smart cities as it does to everything else. There has to be a need, a problem that needs solving, before we can implement technology as a solution. There’s a simple logic that applies to any technology investment: if it solves a problem, it will be useful; if it doesn’t, it will be a waste of money. No one goes to a store and buys a piece of technology, only to decide later what to do with it. We start by identifying a need or a problem and then invest in the technology as a solution. This may seem like simple logic, but it’s shocking how many governments and businesses fail to apply it, leading to massive investments in tech that no one uses.

The playbook is all too familiar: someone at the higher levels of the organisation catches on to a technology buzzword and decides that it would be good to implement it in their organisation. This is followed by discussions around the latest tech innovations, smart technologies, the Fourth Industrial Revolution and artificial intelligence. People cite case studies of companies and governments that experienced massive success with these new and groundbreaking technologies. This creates a sense of #Fomo, or “fear of missing out”, and everybody wants to be seen as the driver of innovation and progress. Typically, budgets are set aside, teams are set up, consultants are brought on board and the transformation projects kick off. Unfortunately, most such projects end in dismal failure. Why? They start with the technology and then try to work out the “best fit” for that tech in their organisations. This approach is a recipe for disaster. The correct approach would be to start with the problems and bring in the technology to solve those problems.
Each city has its own challenges: Singapore has the challenge of limited usable space, leading to issues relating to food production, fresh water availability and housing. Istanbul has the challenge of massive numbers of tourists. Delhi has to deal with dangerously high levels of air pollution. South African cities undoubtedly have their own, unique problems.

Once the problems have been identified, it’s time to move to the next major focus area: people – those who will drive change by finding the solutions to the problems. This raises a huge lingering question about the president’s vision: do we have people with the requisite skills to drive the concept of smart cities? If we’re lacking in any way, then before we can take another step, we will need to develop our people. We can’t rely on technology vendors or foreign governments to solve our problems. We need South Africans to solve South Africa’s problems.

So, rather than dreaming of a non-existent, imaginary smart city with who knows what technology, it’ll be more pragmatic to outline the major developmental and socio-economic problems that plague people in our current cities, and then find smart solutions to those problems. In this way, we can eventually turn every South African city into a smart city.

Can technology be racist?

Can computer algorithms be racist? This was the subject of a heated debate in the US recently when Congresswoman Alexandria Ocasio-Cortez claimed that facial-recognition algorithms are biased against people with darker skin. This was a huge claim that, if proven correct, could have serious societal implications. For example, algorithms that are racially, culturally or gender-biased could prevent women and people belonging to certain races and cultural groups from getting bank loans and being considered for jobs. They could also force them to pay higher interest rates and insurance premiums. Some people reacted to Ocasio-Cortez’s claim by mocking her, saying that algorithms are driven by maths, so they can never be biased. Others came out in support of her. A number of experts in the field of artificial intelligence chimed in, saying she is indeed right: algorithms can be, and in many cases are, in fact, biased. But how is this possible?

An algorithm, in the most general sense, is a step-by-step procedure for solving a problem. Algorithms may be as simple as calculating the area of a rectangle or as complex as calculating the trajectory of a rocket in space. These algorithms work on the basis of inputs and outputs: you put data in, you get data out. They are straightforward in the sense that they have a set of inputs and predictable outputs. Clearly, there can be no bias there.

Around the 1950s we saw completely new types of algorithms emerge, known as machine learning (ML) algorithms. Machine learning refers to a branch of artificial intelligence that allows computers to learn and improve their performance over time by analysing existing data. ML algorithms are highly complex algorithms designed to analyse data, make inferences from the data and then adapt. Search engines are a familiar example: over time they learn our search habits and then customise the search results to our habits and preferences. As a result, two people searching for the same term will probably get completely different results. For example, a search for “Java” will likely yield results relating to coffee for a person who regularly searches for coffee, while the same search term will return links relating to computer programming for a coder. YouTube search and recommendations work in a similar way. If you search for a specific topic – Italian recipes, for example – you will continue to see recommendations for videos relevant to Italian recipes long after your search.

As advanced and complex as they are, these algorithms have one thing in common with their simpler counterparts: they need data inputs in order to function. If that input is garbage, then they will simply return garbage. In computer science circles, this is described more concisely as “garbage in, garbage out”, more commonly known by its acronym, “Gigo”. There have been a number of rather shocking examples of Gigo in recent times, such as when Microsoft’s chatbot named “Tay” began to use racist language within a day of its launch. The chatbot was intended to be an experiment in “conversational understanding”, and it was hoped that, like a child, it could learn by listening to, and engaging with, people. Tay began on a positive note, making statements like “humans are super cool”, but unfortunately things spiralled out of control very quickly, and it began to make statements like “I just hate everybody” and others that are too disturbing to mention. By the end of the day, it had begun to sympathise with the Nazis. Naturally, Microsoft had to pull it down. What went wrong?
Simply put, Tay was innocent when it went live, but it fell into the wrong company, which fed it garbage, resulting in garbage output. The case of Amazon’s artificially intelligent recruiting tool was less dramatic, but a lot more insidious. The system was meant to automatically screen large numbers of resumes and pick out the best people for the job. But Amazon found that it had a serious problem: it didn’t seem to like women. It was discovered that the system did not screen resumes in a gender-neutral way, and was biased against women. It even penalised applicants who used words like “women’s” in terms such as “women’s chess club champion”. Although it was eventually shelved, the root cause was found to lie with people, not the system itself. It apparently picked up its bias against women by observing the company’s recruitment patterns over the previous 10 years. In other words, the algorithms basically picked up on existing biases and simply automated them.

Can we ultimately say algorithms are biased? Machine learning algorithms certainly do not start out life with biases but, like children, they will probably pick up biases along the way, depending on the attitudes of the people who interact with them. If they are found to have biases, we need not look for the problem in the machines, but in their creators. Machines will, after all, only detect and automate existing bias.
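To make the “garbage in, garbage out” point concrete, here is a minimal sketch in Python. It is not Amazon’s system or any real recruiting tool: the historical records, keywords and scoring method are invented purely for illustration, to show how a model trained on biased decisions simply reproduces them.

```python
# A toy illustration of "garbage in, garbage out": a model that learns from
# biased historical hiring decisions reproduces the bias. All data below is invented.
from collections import defaultdict

# Hypothetical historical records: (phrase from a CV, was the candidate hired?)
history = [
    ("chess club", True), ("chess club", True), ("chess club", True),
    ("women's chess club", False), ("women's chess club", False),
    ("robotics team", True), ("women's robotics team", False),
]

# "Training": count how often each word co-occurred with a hire or a rejection.
hire_counts = defaultdict(lambda: [0, 0])  # word -> [hired, rejected]
for phrase, hired in history:
    for word in phrase.split():
        hire_counts[word][0 if hired else 1] += 1

def score(phrase):
    """Average historical hire rate of the words in the phrase."""
    rates = []
    for word in phrase.split():
        hired, rejected = hire_counts.get(word, [0, 0])
        total = hired + rejected
        if total:
            rates.append(hired / total)
    return sum(rates) / len(rates) if rates else 0.5

# The model was never told anything about gender, yet it penalises the word
# "women's" because the decisions it learned from did exactly that.
print(score("chess club"))          # higher score
print(score("women's chess club"))  # lower score
```

The point of the sketch is that nothing in the code mentions gender; the bias lives entirely in the historical data the model is given.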

Computers make giant leap

Lee Se-Dol had never felt so helpless before. His opponent was getting better with each game, and coolly countering all of his best moves. It was a surreal experience. Lee is a legend of his time, the best player on the planet, and one of the best players in recorded history. His playing style was described as creative, intuitive, wild and unorthodox. But he had never faced an opponent like this before. He was sure he would win at least four games – after all, the opponent was just two years old, while he had been playing the game for pretty much all of his 33 years. But here they were, three games later, and Lee had lost them all. How had it come to this?

Go is one of the most complex strategy games in the world. The game is played on a 19×19 grid with black and white stones, and the basic rules are fairly simple: choose a colour, place your stones on the board, and try to surround your opponent’s stones and claim more territory than they do. But beyond that, the game is complex, largely because there are more possible board positions in Go than there are atoms in the known universe. Although logic plays a part, the game relies heavily on creativity and intuition. Unlike in chess, there are no proven strategies to win a game. Each game is different, and players have to rely on their “gut feeling” to succeed.

This made it impossible to program a computer to play Go. In a game such as chess, programmers would simply input the rules of the game, as well as strategies, and the computer would use this information to play. In a typical game, the computer would play by mapping out every possible move from the current state of the board, and every subsequent move based on that one. But that was impossible in a game like Go, where the number of possible moves is so incomprehensibly large that it is beyond even the most powerful computers to calculate. Clearly, traditional programming techniques were out of the question. Yet the temptation to create a Go-playing algorithm was too great to resist. Because Go is so much more complex than chess, if such an algorithm could be created, it would be a huge step forward for the field of artificial intelligence.

In 2014, scientists at Google DeepMind, a company focused on artificial intelligence, began work on a Go-playing algorithm called AlphaGo. Rather than programming the computer with rules and strategies, they used machine learning: they showed the computer a large number of Go games and let it figure the patterns and techniques out by itself. The computer was shown around 30 million positions from human games, and it learned to play. Over the next few months it continued learning, and even began playing against itself. Although AlphaGo was making progress, all predictions were that it would be some time before it became good enough to play against a human. In an article in Wired, Professor Alan Levinovitz predicted that it would take at least a decade before a computer Go champion emerged.

By late 2015, the DeepMind team decided it was time to test AlphaGo against a human. The player chosen was Fan Hui, the European Go champion. A match was arranged, and AlphaGo beat him five games to nil. This was a momentous occasion: for the first time in history, a machine had beaten a professional human player in a game that relied not just on logic, but on creativity and intuition. When the news broke, the world was abuzz with excitement.
It wasn’t just that a computer had beaten a human at a game: the long-term implications were astounding. Were computers now becoming creative and intuitive? AlphaGo had proven too good for Fan Hui, but people wanted to know how good it really was. Fan may have been the European champion, but he was ranked only 633rd worldwide. How would AlphaGo perform against the world champion? In March 2016, a five-game match was arranged between AlphaGo and the world’s number one player, Lee Se-Dol. By the third game, things were not looking good for Lee: he had lost all three. Nonetheless, he gathered his wits and came back to win the fourth game. It wasn’t enough to win the overall contest, but at least it was a win. AlphaGo wouldn’t stop there, though. It won the fifth game, giving it a 4-1 victory against the world champion. In essence, the new world champion Go player is not a human, but a machine.

Predictions are that we will see more technological advancements in the next decade than we saw in the previous century. People wonder how this accelerated pace of advancement will be possible, until they realise that it will be driven by machines. A few years ago, this idea would have been met with scepticism, because innovation and technological advancement require higher-order thinking abilities, such as creativity, intuition and problem-solving – things that were considered strictly human traits. AlphaGo has blasted this notion out of the water, and has shown us that computers can learn, and are capable, to some degree, of creativity and intuition.
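To get a feel for why the brute-force, map-every-move approach described above works (just barely) for chess but collapses for Go, here is a rough back-of-the-envelope calculation in Python. The branching factors and game lengths are commonly quoted averages, not DeepMind’s figures, and the point is only the order of magnitude.

```python
# Rough game-tree size estimates: average legal moves per turn, raised to the
# power of an average game length. These are ballpark figures for illustration only.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80  # commonly quoted order of magnitude

chess_tree = 35 ** 80    # ~35 legal moves per turn, ~80 moves per game
go_tree = 250 ** 150     # ~250 legal moves per turn, ~150 moves per game

print("chess game tree ~ 10^%d" % (len(str(chess_tree)) - 1))
print("go game tree    ~ 10^%d" % (len(str(go_tree)) - 1))
print("atoms           ~ 10^80")
print(go_tree > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True: exhaustive search is hopeless
```

Even the chess number is far too large to search exhaustively, which is why chess engines prune aggressively; the Go number is so much larger again that pruning alone cannot save a traditional program, which is what pushed DeepMind towards learning-based approaches.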

Advancing faster than we thought

Over the past century and a half, we have seen more technological advancement than in all other periods of human history combined. There really are no surprises there: from the height of the First Industrial Revolution in the mid-19th century to the mobile revolution of the early 21st century, it has been an incredible period in human history, and our world has changed forever. The real surprise comes when scientists claim that, despite the massive progress we’ve made over the past century, in the next decade we will see more technological advancement than in the previous century. This is surprising, and one would be excused for doubting the validity of the claim. After all, a century’s worth of advancement will be surpassed in just a decade? And not just any century, but the 20th century, the century of peak human accomplishment. Is it possible?

The answer is yes, it is possible, and for one primary reason: while the technological advancements of the previous century were driven by human beings, those of the next decade will be driven by machines. Machines will be solving the world’s most complex problems, whether those problems are of a business, scientific or social nature. Computers are now smarter than ever and have the ability to sense the world around them, to think, identify patterns, make decisions and learn. This is thanks to a field of artificial intelligence known as “machine learning”. Machine learning is where machines are not programmed by humans to perform certain tasks, as they were in the traditional sense, but are instead taught how to learn and to continuously improve. All we do is provide them with basic machine learning algorithms and lots of data, and they learn to figure things out by themselves, just like little kids exploring the world around them. It is a fascinating yet frightening thought that our creations are now able to evolve and improve themselves beyond anything we might have imagined.

To understand how machine learning works, consider a scenario where we need a computer to sort through pictures of cats and dogs and place them into the appropriate “cat” or “dog” categories. One way to do this is to “teach” the computer the difference between cats and dogs by feeding it thousands of pictures of cats and dogs, and tagging each one as either “cat” or “dog”. By scanning, studying and analysing the pictures, and then associating them with the tags, the computer will, over time, be able to identify specific facial and body traits that differentiate a cat from a dog. In other words, the computer will learn to tell a cat from a dog. This kind of machine learning, called “supervised machine learning”, is very common; a simple sketch of the idea follows at the end of this article.

In fact, most of us have been actively teaching computers to recognise certain images without even realising it. For example, have you ever used one of those rather annoying “Captcha” features on websites that require you to identify text or images to prove you are a human and not a robot? Did you know that by answering the questions correctly, we are actually “teaching” the computer what is in the image? The computer remembers our responses and, in the future, will use that image to identify similar objects. For example, if you identify a picture of a traffic light, the computer will use that picture to identify traffic lights in other pictures. It is a simple, yet powerful way to make computers smarter. Have you ever wondered how social networks are able to identify people in pictures? You guessed it: we teach them.
Whenever you post and tag a picture of yourself or anyone else, the system remembers who is in the picture. Then, whenever that person appears in an image, it recognises them by comparing the new image with the ones you have uploaded. Computers use a similar method to learn just about anything: pictures, handwriting and voice commands. And they are constantly learning: the machines around us, from social media systems to home automation systems, cellphones and even smartwatches, are constantly observing us, and constantly learning and improving. It is only a matter of time before they become as proficient and natural as we are when it comes to looking, listening and making sense of the world around us.

The key difference between computers and us is that they are much better and faster at processing vast amounts of data, whether it is in the form of pictures, audio, text or numbers. By combining the ability to sense and recognise the world around them with their immense processing abilities, computers will be able to identify and solve problems that are too complex for humans to tackle. In fact, complex artificial intelligence algorithms are already busy solving a number of problems in the business and scientific worlds. It is these super-intelligent algorithms that will drive the innovation of the future. As for us? Well, we’ll just have to play catch-up.
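For readers curious to see the “cat or dog” idea in code, here is a minimal sketch of supervised learning in Python. It is not a real image-recognition system: the “pictures” are reduced to two invented numeric features, and the tiny data set and feature names are made up purely for illustration of learning from tagged examples.

```python
# A toy supervised-learning example: learn from human-tagged examples, then
# classify new ones. The features (ear length, snout length) and values are invented.

# Tagged training examples supplied by a human: (features, label).
training_data = [
    ((2.0, 1.0), "cat"), ((2.5, 1.2), "cat"), ((1.8, 0.9), "cat"),
    ((4.0, 3.5), "dog"), ((3.8, 4.0), "dog"), ((4.5, 3.8), "dog"),
]

def train(examples):
    """Learn one average (centroid) feature vector per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in s) for label, s in sums.items()}

def classify(model, features):
    """Assign the label whose learned centroid is closest to the new example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: distance(model[label], features))

model = train(training_data)
print(classify(model, (2.1, 1.1)))  # expected: "cat"
print(classify(model, (4.2, 3.6)))  # expected: "dog"
```

Real systems use far richer features and far more data, but the principle is the same as in the Captcha and photo-tagging examples above: humans supply the labels, and the machine learns the patterns that go with them.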