Author: Bilal Kathrada

Smartphones on the way out, augmented reality on the way in

The days of the cellphone are numbered. Do you remember the last time you watched a movie on a VCR, listened to your favourite tracks on a Walkman, or sent your camera film to a processing centre? Probably not. Over the years, these technologies have become obsolete and have largely been replaced by smartphones. This is the nature of the fast-changing world we live in. As technology progresses, we will continue to see new devices emerging and old devices becoming obsolete. No surprises there.

What will come as a surprise to many is that the next device on the endangered list is none other than the cellphone itself. How is this possible? How will the world as we know it function without cellphones? Sure, old devices like VCRs were replaced by smartphones, but what could possibly replace our smartphones? The truth that many of us don't realise is that we've already begun to replace them.

Case in point: it is possible to make a call without picking up your cellphone or removing it from your pocket. While this may have been impossible just a few years ago, today all you need to do is talk to your phone: "Hey Siri! Call Mum." Simply plug in a pair of earphones, and you will be in business. Today's voice-activated, natural-language virtual assistants like Apple's Siri and Google Assistant have become increasingly advanced and powerful. Besides making calls, they have become adept at other functions like opening apps, playing audio, sending messages, searching the web, setting reminders and making appointments.

They also make cellphone usage safer. If you are driving and need directions, there is no need to take your eyes off the road and pick up your phone. You can simply ask your virtual assistant for directions, and it will guide you to your destination, step by step.

Of course, not all functions of your cellphone can be handled through voice commands. We still need to look at a display for activities like reading messages and checking calendars. Enter smart watches. In 2018 more than 45 million smart watches were sold, and there are some good reasons for this. Apart from built-in features like fitness tracking and heart-rate monitoring, smart watches allow you to quickly make and receive calls, read and respond to messages, check schedules, take voice notes, and play audio like music and podcasts. Smart watches provide yet another reason not to look at your phone, but with a tiny, one-square-inch screen, they definitely won't replace cellphones.

The final nail in the cellphone coffin will be a new and emerging technology that is progressing in leaps and bounds: augmented reality. AR is a technology that will eventually bring your cellphone's display right up to your eyes, giving you a heads-up, hands-free display wherever you are looking and turning your entire field of vision into a huge screen. In essence, you will no longer need a physical cellphone or a tablet PC. Unlike virtual reality, AR will not block your view of the real world. You will still be able to see the world as normal, but things that would normally show up on a cellphone screen will be overlaid onto your field of vision, and you will be able to choose what appears there. For example, you might choose to display call and email notifications in a corner of your field of vision, while web surfing, videos and apps might occupy the central region.

Companies like Microsoft, Apple and Magic Leap are putting their weight behind AR, and we could see practical, lightweight AR glasses within five years. In fact, a start-up in Los Angeles is already taking things a step further by developing AR contact lenses.

A big question around AR is how we will interact with it, since there will be no interface like a touchscreen or a keyboard. There are some interesting solutions to this, which I will discuss in more detail in the next article.

I believe that physical cellphones and tablets have outlived their usefulness, and it's time for a change. We don't think about it much, but in reality, mobile devices are notoriously unwieldy and intrusive. To operate them, we need to look directly at them and divert our gaze away from the world around us, which can be dangerous. Mobile devices also need at least one entire hand to operate, usually our dominant hand, vastly reducing our ability to do any physical work. Added to that, they are extremely fragile: they fall, they bend, they break. And they get stolen.

Previously we had no choice but to use physical devices, but with new technologies such as augmented reality on the horizon, it seems the days of the good old physical device are numbered.

A glimpse into the future of computing

Thousands of people flocked to Las Vegas this month, not for the usual attractions, but to attend the Consumer Electronics Show (CES), the world's biggest annual electronics expo, where all the big names in tech bring out their latest toys for the world to see. Among the new gadgets on show were a roll-up television, folding smartphones, domestic robot servants for the elderly, and robot legs that help you walk.

One announcement perhaps did not raise as much excitement as the others, but it is arguably the most significant step forward in computing since the 1950s: IBM unveiled the world's first commercially available quantum computer, the Q System One.

The moment was reminiscent of the unveiling of Univac 1 in 1951. Univac 1 was the world's first commercially available computer. It weighed more than 8 tons, took up 35m² of space, needed dozens of people to operate and had its own cooling plant. Yet it was not much more powerful than a modern pocket calculator; a modern cellphone has nearly a million times the processing power and four million times more memory. Nonetheless, it was state of the art at the time, and back then no one could have predicted what computers would be like 70 years in the future.

Similarly, the Q System One is a pioneer in its space. It isn't perfect, and to sceptics it might seem that we've stepped back 70 years to the Univac. But in reality, the Q System One bears very little resemblance to any past or current computer. And the unveiling couldn't have come at a better time.

Modern computers are extremely powerful, but not powerful enough to solve a large number of highly complex problems. For example, in the pharmaceutical industry, scientists developing medicines could ideally do with ultra-powerful simulators that could accurately predict what effect certain drugs would have on the human body. This would not only eliminate live testing, but also accelerate the process of developing new medicines. Due to the complexity of the human body and of the drugs themselves, the power needed to run these simulations is beyond even today's supercomputers.

Then there is the challenge of big data. We generate massive amounts of data every second, and this data needs to be analysed to solve many problems in the business and scientific worlds. A good example is the Square Kilometre Array in the Karoo. At its peak, it will generate 35 000 DVDs' worth of data every second. That's a lot of data to analyse. Unfortunately, much of this data will never be thoroughly analysed, because we simply do not have sufficient computing power to do so.

Worse still, we may not be able to build more powerful computers for some time. At a very basic level, computers are powered by transistors, and over the years transistors have become exponentially smaller, allowing manufacturers to cram more of them into processors and making those processors exponentially more powerful. The trouble is, we've come to a stage where transistors are so small that they are nearing the size of a few atoms. We cannot go much smaller than this, because the laws of physics do not allow it. Hence, we will not be able to make more powerful computers unless we find a radically new approach.

The solution lies in quantum computing. Quantum computers do not rely on the same principles as our transistor-based machines and, to top it off, they are insanely powerful. This power comes from the fact that, instead of using physical transistors for processing, they use the quantum energy states (quanta) of particles such as electrons.

Eric Ladizinsky, a co-founder and chief scientist at D-Wave Systems, gives a good analogy to describe the power of quantum computers. Let's say we were looking for a specific mark on a page in a book in the Library of Congress, which holds some 50 million books. A normal computer would accomplish this by scanning through the books one at a time until it finds the mark. A quantum computer, on the other hand, would not search through the books one at a time; it would search through all 50 million books simultaneously.

With this incredible power, quantum computers will revolutionise computing and transform pretty much every industry. Some of their most critical applications will be in astronomy, medicine, pharmaceuticals, transport and cybersecurity. The quantum computing field is still in its infancy, and there is a long way to go before quantum computers surpass classical computers in usefulness. But, as with the Univac computer, given time and more research, who knows what they will look like in 70 years' time? Quantum computers are definitely the future. Perhaps not our future, but the future of generations to come.

Hi-tech overhaul of SA education system will fall apart without strategic plan

Recently, the government announced that it was planning a major hi-tech overhaul of the South African education system. As an educational technologist who sees the tremendous potential of technology to radically transform education, I was elated. Finally, the government was taking steps to bring the education system on par with other countries, giving our learners a fighting chance to become globally competitive.

I was sure they would call in some of the country's best minds to formulate a strategic rollout plan that would address our education crisis by leveraging technology to make quality education available to all kids, especially those in the deepest rural areas. What would that strategy look like? Of course, there would be a pilot roll-out at a number of selected schools across the country. That goes without saying, because it is Change Management 101. Concurrently, there would be intensive teacher training initiatives, a major update of the curriculum, and perhaps my ultimate dream would come true: computer science would become a core part of the curriculum.

Apparently, that's not the case. There is no talk of a strategic plan or change management. No curriculum revamps, no teacher training. What is the big overhaul, then? Handing out tablet PCs to more than 2000 schools across the country. What a disappointment. This is not going to go well.

First, let's consider the cost of rolling this out across the country's roughly 23 000 schools: that means roughly 23 million learners. A decent tablet costs roughly R2000, bringing the total cost to around R46 billion. Factor in digital textbooks and logistics, and the amount could easily double. That is a huge amount of money. If investing that kind of money would solve our education problems, it would be totally worth it. But spending that amount of money on tablet PCs is definitely not going to fix anything.

Others have tried and failed, with disastrous consequences. In 2013, the Los Angeles Unified School District embarked on a plan to put an iPad, preloaded with eLearning content, into every child's hands. To accomplish this, it partnered with Apple and Pearson. The budget for the roll-out was estimated at $1.3bn (R18.1bn). It was a highly ambitious project, and there was a lot of optimism. After all, all the ingredients for success were there: one of the most progressive school districts in the US partnering with one of the world's largest and most beloved tech companies and a highly renowned publisher. What could go wrong?

The trouble is, the project went south very quickly and, aside from a colossal waste of money, there were other serious repercussions: a whole lot of blame-shifting, lawsuits and even an FBI investigation. But the biggest disaster by far was that classroom technology took a giant leap backwards. People became sceptical about classroom technology. If these three power players couldn't make a go of it with such a massive budget, then is there even a place for technology in the classroom?

Why did the roll-out fail so dismally? The answer is simple: the focus was on technology, not on people. The initiative was driven by the district officials and the vendors, but those who really mattered, the teachers, had almost no involvement in the planning phase, nor did they receive any training and mentorship. When the iPads arrived, they simply did not know what to do with them. What's really worrying is that all indications are we are about to dive headlong into the same fire.

The right approach is to begin by asking "why?" Why are we doing this? What specific problem are we trying to solve? What are the desired outcomes? Once we have this in place, the next question is "who?" Who are the major role-players who will drive this initiative? What skills will they need to make it a success? The answers to these questions will help us develop a strategic plan to guide the roll-out. To mitigate risk, a phased roll-out makes sense.

Only once the strategy is in place should we focus on technology. Strategy should always drive the technology, never the other way around. You buy a device because you have a specific need; you never buy a device and then try to figure out what to do with it. The latter is insanity. The above approach can be summarised in what I call the "Four Pillars of Technology Implementation": purpose, people, pedagogy and platform, where platform refers to the hardware, software and connectivity required.

It is my sincerest hope that our government takes heed of the massive failure in LA and steers clear of repeating their mistake. I know that handing out tablets sounds like a great idea, but in the end, when the initiative fails, the biggest victims will be the 23 million learners we thought we were helping.

Ted Dabney, the man whose life was a game

Last year, the world of computing lost one of its great pioneers, Ted Dabney. He was the co-founder of Atari and one of the creators of Pong, one of the most iconic video games ever made. Dabney is known in the industry as the man who brought video games to the world and launched the video gaming industry.

In the early 1970s, video games were only available on commercial computers, which typically cost hundreds of thousands of dollars and hence were too expensive for home use. Dabney decided to use his knowledge and skills as an electrical engineer to make a low-cost gaming computer that was affordable enough for an average household. He started by converting his daughter's bedroom into a makeshift laboratory and gathering some cheap television parts. He then set about creating his gaming machine and succeeded shortly afterwards. He housed the circuitry in a box made of plywood and mahogany laminate and created the world's first commercial video game, Computer Space.

Dabney partnered with Nolan Bushnell to launch the company Atari and took the newly built consoles to market. The first game was not a commercial success, but Dabney and his partners followed it up with a second game, Pong. The game was extremely simple by today's standards: two vertical lines on the left and right edges of the screen were the paddles, while the ball was just a dot bouncing between them. Players used the device's controllers to move the paddles up or down to block the ball and hit it back towards the opponent's side. Despite its simplicity, the game was fun and engaging, and it became a massive commercial success, propelling Atari to become one of the biggest gaming console and computer companies in the world.

Pong has personal, sentimental value for me. It is one of the first video games I ever played, and I have the fondest memories of playing it with my dad on an Atari 2600 console when I was around 8 or 9 years old.

Not many people have heard of Dabney, but with the creation of the Atari console he earned his place in computing history. Like him, there are countless others who laid the foundations for the computing technology that is an integral part of our lives today.

One such person is Muhammad ibn Musa al-Khwarizmi, a Persian mathematician born in the 8th century, who laid the foundations for modern maths, and for pretty much every modern programming language, by developing the concept of the algorithm. Al-Khwarizmi observed that people struggled to perform calculations, and that calculations were often fraught with errors. He set out to develop simple algorithmic formulae into which people simply needed to plug values to get the desired results. A simple example is the formula for calculating the area of a room: "Area = Length x Breadth". To calculate the area of a rectangular room of any size, one simply had to measure the dimensions of the room and plug them into the formula. This may seem simple enough to us, because we've been taught these formulae from a young age; in al-Khwarizmi's time they did not exist.

So significant was al-Khwarizmi's contribution that his name became synonymous with algorithms. The word "algorithm" is in fact derived from "Algoritmi", the Latinised form of his name. His work transformed nearly every field: business, academia, science, agriculture, law and, of course, computing, although that came much later.

Fast-forward nearly a millennium to the 19th century, and we find what many regard as the first computer, designed by an English mathematician and engineer by the name of Charles Babbage, who needed a quick and automatic way to perform complex calculations. The result was his "analytical engine", a mechanical calculator and the predecessor of the modern computer. Of course, even a simple mechanical computer such as Babbage's analytical engine needed a programmer, and that work went to Ada Lovelace, an English mathematician who developed an algorithm to run on the analytical engine. In doing so, she became the first computer programmer in history.

There is a recurring theme in all of the examples above, whether Dabney, al-Khwarizmi, Babbage or Lovelace: they saw a need or a problem and used their knowledge and skills to find a solution. Their drive came from a passion to make life better for themselves and others. We see the same theme in all the inventors and innovators of history, from the great scientists and inventors of the past to the young geniuses of Silicon Valley: where others saw problems, they saw opportunities, which motivated them to create solutions that positively affected countless people's lives. This is the driving force behind technological advancement and, for good or bad, this trend will persist as long as there are problems that need solving.

Calling a remote PHP script from JavaScript using jQuery

In this tutorial I will show you how to call a remote PHP script (a script that lives on a separate server) from your JavaScript application using jQuery.
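As a rough sketch of the kind of call the tutorial builds towards, the snippet below uses jQuery's $.ajax() to request JSON from a hypothetical remote endpoint (https://example.com/api/getUsers.php and the limit parameter are placeholders, not a real script). For a cross-origin request like this to succeed, the PHP script on the other server would also need to send suitable CORS headers, such as Access-Control-Allow-Origin.

    // Call a remote PHP script with jQuery and handle the JSON it returns.
    // The URL and parameters below are placeholders for illustration only.
    $.ajax({
        url: "https://example.com/api/getUsers.php", // hypothetical remote PHP script
        method: "GET",
        data: { limit: 10 },   // query-string parameters passed to the script
        dataType: "json"       // ask jQuery to parse the response as JSON
    })
    .done(function (users) {
        // Runs when the request succeeds; 'users' is the parsed JSON response.
        console.log("Received " + users.length + " records:", users);
    })
    .fail(function (jqXHR, textStatus, errorThrown) {
        // Runs when the request fails (network error, CORS block, 4xx/5xx response, etc.).
        console.error("Request failed: " + textStatus, errorThrown);
    });

The same call could be written with the shorthand $.getJSON(), but $.ajax() makes the individual options easier to see.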

Creating a simple database application in NetBeans

In this tutorial we create a simple Java database application in NetBeans. The database will contain two tables: one for authors and one for books.
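To give a feel for the kind of code the finished application will contain, here is a minimal, self-contained sketch that connects to the database over JDBC and lists each author with their books. It assumes a local Java DB (Derby) database named library with the default app/app credentials, and authors/books tables joined on an author_id column; those names are illustrative placeholders, since the tutorial defines its own schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LibraryApp {
        public static void main(String[] args) {
            // Hypothetical connection details for a local Java DB (Derby) database
            // created through the NetBeans Services window.
            String url = "jdbc:derby://localhost:1527/library";

            // Join the two tables (authors and books) and list each pairing.
            String sql = "SELECT a.name, b.title "
                       + "FROM authors a JOIN books b ON b.author_id = a.id "
                       + "ORDER BY a.name";

            try (Connection conn = DriverManager.getConnection(url, "app", "app");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {

                while (rs.next()) {
                    System.out.println(rs.getString("name") + " - " + rs.getString("title"));
                }
            } catch (Exception e) {
                // In the NetBeans application proper, errors would be reported in the GUI
                // rather than printed to the console.
                e.printStackTrace();
            }
        }
    }

In NetBeans, the Derby client driver (derbyclient.jar) would need to be on the project's classpath for the connection above to work.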

Career tips for the 21st century

Artificial intelligence has defeated humans yet again. In the past, machines outdid their human creators in manual activities that did not require much thinking, like assembling vehicles and harvesting crops. But work that required knowledge, creativity and the ability to think and make decisions was securely within the realm of human activity, and no technology could achieve what humans did in that space. That has changed.

Earlier this year, an artificially intelligent system beat a group of lawyers at their own game. In a competition run by the legal AI platform LawGeex, a group of experienced lawyers competed against a computer to find loopholes in a non-disclosure agreement. The humans ultimately scored 85%, while the AI system scored 95%. A difference of just 10 percentage points, nothing too impressive. But the real shocker is in the completion times. The human team completed the task in 92 minutes, which was excellent by most standards. The AI system, on the other hand, was done in just 26 seconds. So much for the days when only blue-collar jobs were under threat from automation.

It is common knowledge that technology and automation have not just caused massive job losses, but have also obliterated a number of businesses, professions and types of work. Fears about job losses from automation are nothing new; they date back almost 200 years, to the Luddite movement, when a radical faction of textile workers protested against job losses through automation in their field by destroying textile machinery. While their efforts earned them a place in the history books, they achieved little: the entire textile industry was transformed, and they did ultimately lose their jobs.

But that was by no means the end for them. Fortunately, human beings are an intelligent, resilient species, capable of learning and adapting. The Luddites eventually adapted to the changes, taught themselves new skills and entered new professions. This has been a recurring theme ever since.

Except things are very different now. The machines that drove the First Industrial Revolution are nothing compared to the thinking, learning, adapting machines that are driving the Fourth Industrial Revolution. Machines can now do the types of jobs that were possible only for humans just a few years ago. Naturally, this raises concerns and fears about the future of jobs. People want to know how to prepare themselves, and the next generations, for this turbulent and ever-changing work environment. What careers are best to get into? How can we ensure we don't become obsolete in a few years?

These are questions I encounter almost daily, more so at this time of year, when young people are choosing career paths. In this past week alone, I was invited to speak at two events and on a radio programme about this exact topic. My standard advice to everyone, whether they are young people making a start or people already in the job market, is this: prepare for uncertainty, embrace change.

It is impossible to tell what the world will look like in a decade, or even in five years. Scientists predict there will be more technological advancement between now and 2025 than there was in the past century. This will drive massive change in the global job markets. Change is inevitable, incessant and constant. Gone are the days of linear career paths, when people were expected to finish school, go to university, get a degree, get a job, work for the same company for the rest of their lives, and retire with a pension and a gold watch.

Today, people typically don't keep a job for more than five years, and that's the world we should prepare for. It's a tough environment out there, and to survive I usually recommend the following five tips:

* Embrace technology, rather than fear it. Don't become the Luddites of today. Technology and innovation are unstoppable forces that will continue to move forward and affect pretty much every profession. Those who embrace technology in their careers will have a huge advantage over those who do not. For example, a teacher who is adept at using technology in the classroom will be in much greater demand than one who isn't.

* Adopt a growth mindset. Keep learning, keep adapting and keep your eye on the trends.

* Focus on skills more than on qualifications. A degree is worth much less today than it was two decades ago. Today, companies want to see whether you can do the job, not whether you can pass exams. That is not to say a degree has no value; a degree plus skills is definitely more powerful.

* Develop your soft skills, like innovative thinking, teamwork and leadership. These will always be in demand.

* Build a strong personal brand. Thanks to social media, it has never been easier to show the world what you are capable of and what you stand for. Constantly document what you learn and what you achieve through a blog, a YouTube channel, a LinkedIn profile or any avenue of your choice.

Why every child should learn to code

What do Bill Gates, Mark Zuckerberg and Nick D'Aloisio have in common, other than that they are all phenomenally wealthy tech entrepreneurs? The answer: they all started very young.

Gates coded his first computer game at the age of 13, way back in the late '60s. He had a lucky break: his school's "Mothers' Club" raised some money to buy a teletype machine, basically a dumb terminal with a keyboard and a printer that connected to a remote GE time-sharing computer. Gates took to the new device like a duck to water and began writing his first code almost immediately. Seven years later, he went on to co-found what was to become the world's largest software company, Microsoft.

Zuckerberg's dad began teaching him to code in the BASIC language on an Atari computer before he was 10 years old. Like Gates, he found a passion for coding and soon began writing some amazing programs. He created an instant messaging system he called "ZuckNet" and a music player called Synapse Media Player, with built-in artificial intelligence that could learn the user's listening habits and recommend tracks. All this he did while still in high school, earning a reputation as a computer programming prodigy.

Then, in his second (sophomore) year at Harvard, he wrote a program to help students select which classes to take based on a number of criteria. He called the program CourseMatch, and it was an instant hit with students, who struggled to select classes. Soon afterwards, in 2004, Zuckerberg began to create a website that allowed shy young men to meet women by putting up a picture of themselves along with a bio and other personal information like preferences and hobbies. He called the site TheFaceBook. It was immensely popular among students; he saw its potential as a social media platform and decided to take the concept to the world. Soon afterwards, he dropped out of Harvard and began working on an improved version of the website, which he subsequently called Facebook. And the rest is history.

D'Aloisio began coding at 12 and, at the age of 17, sold his app, Summly, to the internet giant Yahoo for $30 million, making him one of the youngest self-made millionaires in the world.

All three stories have a recurring underlying theme: each learned to code at a young age. None of them set out to become rich or even to start a business; that happened afterwards. What they did do was learn to code. Their new skill set allowed them to identify and solve real-world problems that they themselves were facing. It just so happened that others were facing the same problems, so there was a ready market. In creating these solutions, they not only became fabulously wealthy, but also changed the world.

Coding skills are more relevant now than ever with the onset of the Fourth Industrial Revolution, where technology has pervaded nearly every aspect of our professional and personal lives. This is why I believe that every child should learn computer science and coding, whether they intend to pursue a career in coding or not. In fact, I would go so far as to say that computer science should become a core subject at all schools nationwide, from primary school level.

I have two main reasons for this. First, we all need a level of mastery over technology and computers, no matter our career. With rare exceptions, we all use computers and mobile devices at work or for personal reasons. Hence, it is essential that everyone, particularly children, learns how to make the most of this technology while avoiding the pitfalls.

Second, the benefits of coding go far beyond computers, technology and apps. A number of studies have shown that coding helps to develop logical thinking, builds problem-solving skills and encourages creativity. When given a coding task, children were found to focus far more intensely than normal, and for longer periods, and they voluntarily persisted with the task. Additionally, coding develops communication skills, because computers are a lot harder to communicate with than humans.

I think these are compelling enough reasons for every parent and school to seriously consider making coding and computer science an integral part of the curriculum. There is already a drive to do this in the US and in most European countries, and in many places it has already been implemented. In the US, it was given momentum by former president Barack Obama, who launched the "Computer Science for All" initiative while he was still in office. If South Africa is going to play a key role in the global digital economy, we will have to start providing our kids with digital skills from a young age.