Technology in 2016

The year 2016 may not, at first sight, go down in history as the most eventful year in technology, but in my mind it was a very interesting year of emerging technologies. It felt like confirmation of a world about to be transformed. I would call 2016 the year of the building blocks. Here are a few points on the state of some interesting technologies at the end of 2016.

If one single thing deserves to be named the hit of 2016, it must be the augmented reality smartphone app Pokémon GO. It can hardly be considered a breakthrough technology, but for some reason it managed to catch people’s attention. The craze caused all sorts of situations where people would wander around hunting Pokémon creatures, and groups of people could be seen behaving strangely in public places. Some pointed out the health benefits of people running around, while others pointed to safety hazards, for example when people drove around catching Pokémon. For Nintendo, which part-owns the Pokémon franchise (the game itself was developed by Niantic), the financial benefits were good: the stock soared and the company briefly became more valuable than Sony.

As for the most fun hardware device of the year, I would pick Snap Spectacles: colourful sunglasses with a built-in camera. The interesting thing about this wearable is that it is sold only from a handful of vending machines in unannounced locations, which adds an element of mystery. The scarcity makes the Spectacles sought-after items.

Other, perhaps more anticipated, devices of 2016 were the virtual reality (VR) headsets. New headsets came to market, such as the Oculus Rift, HTC Vive, and Sony PlayStation VR, to name a few. Many games were released and many game studios are turning to VR. Still, we are in the early stages of this technology wave (see the article Through the VR desert). VR is mostly associated with games, which is understandable. However, VR has huge implications in other fields. Enterprise VR is just taking off, helping businesses do a better job. We can expect to see applications in architecture and design, real estate, medicine, education, and manufacturing.

Video was also very important in the passing year. On Facebook alone, around 100 million hours of video are watched every day. The year saw more widespread use of 360-degree videos, where you change the picture by moving your smartphone or tablet; on desktops, the mouse can be used to swing the viewpoint. The BBC technology show Click aired a whole episode shot in the 360 format. The format has been called the “gateway drug” to VR, and what is driving it is the dropping cost of cameras.

The year 2016 marks the end of the smartphone era that started in 2009 (see the article Long Live the Smartphone). Over the past few years, new features and improvements made people switch phones every year or two. But with the iPhone 7, released in September 2016, the improvements were incremental. Sure, it was better and faster, but not at the same rate of improvement as earlier in the wave. That did not stop the fanboys from waiting in line, again.

The Samsung Galaxy Note 7 smartphone, released in August 2016, got even more press, but for reasons the company cannot have liked. The phone had a battery fault and could catch fire. Companies started to ban the phone from their offices, and it was banned on public transport and on airplanes. Samsung had to recall the phones, and the whole fiasco will cost the company billions.

The year 2016 also saw developments in voice assistant devices. Amazon released the Echo a couple of years ago, and this year Google added Home. These devices sit in your home and listen to everything you say, and when asked or directed, they will answer or respond to your requests. Common tasks are playing music, changing the lights, and simply answering questions. Amazon is expected to sell 5.2 million units in 2016, up from 2.4 million in 2015. The strategy of offering a voice assistant like Alexa may be devilishly clever since, according to Business Insider, Echo owners increased their spending on Amazon by 10% after buying the device.

Having a device like this in your house might sound creepy, and it is. TechCrunch reports that the police in Bentonville, Arkansas, are asking for data from an Echo device to help solve a murder. The advice, at least, is not to plan or commit any crimes while the device is listening. I guess the same goes for cheating on your spouse.

Alexa and Home are just one application of artificial intelligence, AI (see the article AI is the New Electricity). The year 2016 was the year AI became recognised as really working. Finally, after 60 years of hopes and disappointments, interesting solutions are finding their way into everyday services. This was the year DeepMind’s AlphaGo beat Lee Sedol at Go. The important trend is that the tech giants, such as Amazon, Microsoft, Google, and IBM, are offering services for building AI products. This means that startups have access to enormous computing capabilities without substantial investment. We can expect more AI to enter applications in the coming years.

Of course there are other building blocks, like the Internet of Things, blockchain, robotics, and drones, to name a few, that also advanced during the year. All of these building blocks are ready for innovators to pick up and use to build new technologies. 2017 and beyond should be interesting.



The Sharing Economy: For Better or for Worse?

With the rise of the Internet and always-on smartphones, new opportunities for connecting people became possible. This allows platforms such as Airbnb and Uber (the classic examples) to become very efficient, as coordinating service providers and users is easy and cheap. The sharing economy has risen fast over the last few years and has caused all sorts of effects, good and bad. One of my New Technology 2016 students, Gunnhildur Finnsdóttir, wrote a paper on this topic:

“My thesis is that, for better or for worse, the sharing economy is going to grow even further and be more and more integrated into our lives, so the effects of it so far are likely to be magnified in the near future. The main focus of this research is to examine these effects of the sharing economy on individuals and societies.”

One point she makes is the importance of the local experience. While traditional accommodation can be friendly, visiting a host in their home can feel more like staying with someone you know. One reason is that payment never actually passes between guest and host; it is taken care of by the platform. Gunnhildur writes:

“And this is at the heart of the appeal of the sharing economy, the transactions it organizes are more than simply exchanging a service for cash, they are framed like acts of neighborly kindness or making new friends in a strange city. This emotional value is combined with the resources of a multinational corporation that has access to a lot of user data and infinite ways of processing it to create a very powerful mixture.”

One point made in the paper is about a major trend in the connected world: the end of ownership. This is about understanding a fundamental shift in the way people behave. Gunnhildur quotes Brian Chesky, CEO and co-founder of Airbnb:

“People still want to show off, but in the future I think what they’re going to want to show off is their Instagram feed, their photos, the places they’ve gone, the experiences they’ve had. That has become the new bling. It’s not the car you have; it’s the places you go and the experiences you have. I think in the future, people will own whatever they want responsibility for. And I think what they’re going to want responsibility for the most is their reputation, their friendships, their relationships, and the experiences they’ve had”

The paper also contains a section describing the sharing economy in Iceland. This small country has been experiencing a travel boom in the past few years.

“The number of travellers who used Airbnb to find accommodation in Iceland in 2015 grew by 152% from the previous year while the traditional hotel business saw a growth of 18%”

If you want to understand the sharing economy – the good and the bad, and how it works in Iceland, check Gunnhildur’s paper out:

The Sharing Economy: For Better or for Worse?


AI is the New Electricity

Brain Cells and Deep Space

People have always been fascinated by the mystery of intelligent machines. For most of history, this fascination was limited to myths and legends. From the legend of the Golem to HAL 9000 in the 1968 film 2001: A Space Odyssey, our imagination has been fuelled by some unknown intelligence that will take over our lives. Today, artificial intelligence (AI) is entering a stage of being, to paraphrase the film’s co-writer Arthur C. Clarke, indistinguishable from magic. The impact of AI is going to be huge in the coming years. At a conference in May 2016, Andrew Ng, Chief Scientist at Baidu and one of the leading researchers in AI, stated: “AI is the New Electricity.” We are seeing the beginning of a shift to a world where software will dominate and control our lives.

The idea of intelligent machines is closely tied to computers. The first computers of the 1950s and 1960s were called “electronic brains”. Ironically, they were far from intelligent; they were simply good at calculating, both fast and accurately. Despite early talk of machine intelligence, it quickly became apparent that computers excel at tasks humans are bad at, or at best slow at. Adding 1,000 five-digit numbers is tedious and slow for a human, and the risk of mistakes is high; for a computer it is straightforward, fast, and exact. However, it turned out that tasks humans find easy, such as understanding language or recognising objects in a picture, are notoriously hard to program a computer to do.
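To make the contrast concrete, here is a small illustration of my own (the seed and range are arbitrary choices): summing 1,000 random five-digit numbers, a chore for a human, is a one-liner for a computer and the result is exact.

```python
import random

# Summing 1,000 five-digit numbers: tedious and error-prone for a human,
# instant and exact for a computer.
random.seed(2016)  # fixed seed so the run is repeatable
numbers = [random.randint(10000, 99999) for _ in range(1000)]
total = sum(numbers)
print(len(numbers), total)
```

Running this prints the count and the exact total in a fraction of a second, with no risk of an arithmetic slip.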

Artificial intelligence started as a field sixty years ago, at a 1956 summer workshop at Dartmouth College in the USA. The workshop was organised by John McCarthy and attended by Marvin Minsky, Claude Shannon, Nathaniel Rochester, and others who would become very influential in the field in the decades that followed. The goal of the workshop was to “solve kinds of problems now reserved for humans…if a carefully selected group of scientists work on it together for a summer”. That proved embarrassingly optimistic.

The history of AI is full of “springs”, when new ideas raise new hopes, and “winters”, when people realise the hopes were misplaced or the ideas simply did not work. The general perception has been that AI never delivered on its promise. Yet many of the advances in computer science are due to research in AI. As soon as something became practical and worked, for example a new way of searching through vast numbers of possibilities, it became known as something else. Some ideas failed simply because of the limited capacity of the computers of the time. For example, the idea of building a computer system modelled on the brain, using neurons and connections between them, came as early as the 1950s, and some of the mathematical groundwork predates the first computers. However, the computers of the 1950s and 1960s were simply not powerful enough to achieve any success.

The first true public success of AI came in 1997, when IBM’s Deep Blue beat world chess champion Garry Kasparov. People realised that machines could become better than people at some cognitive tasks. In 2011, AI hit another milestone when IBM’s Watson supercomputer won the television quiz show Jeopardy!. Pitted against the two most successful players in the show’s history, the AI managed to win. The game requires understanding of language, so this signalled a new era in natural language processing.

In 2012, Google published a seemingly uninteresting blog post titled “Using large-scale brain simulations for machine learning and A.I.”. In the post, Google explained how a neural network they had built, using a form of machine learning called deep learning, had discovered how to recognise cats in YouTube videos. If there is anything in abundance in this world, it is YouTube videos of cats.

So how does this work? We know how traditional programming works: you write programs as series of commands, such as expressions, variable assignments, if-statements, and while-loops. These instructions tell the computer what to do, and the computer executes them. If there is an error, a “bug”, you edit your program and run it again. Neural networks are not like this. They are, of course, programs, but instead of programming the task, like finding cats or understanding language, we build a neural network, a kind of “brain”, and train it to learn the task from examples. For example, Google’s DeepMind created an artificial intelligence program that uses deep learning to play Atari games. The only input the program was given was how to control the game (for example, move a bar left or right) and that the score should be as high as possible. The program then trained itself to master the game.
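To make the contrast with traditional programming concrete, here is a minimal sketch of my own, far simpler than anything DeepMind builds: a single artificial neuron, a perceptron, learns the logical AND function purely from examples. Nothing in the code spells out the AND rule; the neuron finds it by adjusting its weights whenever it makes a mistake.

```python
# Training examples: two inputs and the desired output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights, initially zero
b = 0.0         # bias term

def predict(x):
    # The neuron "fires" (outputs 1) if the weighted sum exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training loop: for every mistake, nudge the weights toward the answer.
for epoch in range(10):
    for x, target in examples:
        error = target - predict(x)
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

After a handful of passes over the data the weights settle, and the neuron answers correctly on all four inputs. Real deep learning stacks millions of such units in layers and trains them with gradient descent, but the principle is the same: the behaviour is learned from data, not written down as instructions.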

The cat discovery was the beginning of a new AI spring. Of course, those of us who have followed AI research for a long time took this with the usual skepticism, a sort of “here we go again” attitude. Neural networks did not work in the past; why would they work now?

Three things are now different. First, machine learning algorithms have improved over the years; many academic papers are published every year and the knowledge accumulates. A quick search on Google Scholar reveals 638,000 hits dated since 2012. Second, there are vast resources in computing power; you can now build a cluster of 20,000 GPUs (graphics processing units), a far cry from the computers of the 1960s. Third, there are huge amounts of data available for training networks: the data generated every day by people and devices alike, Big Data, is the input for machine learning.

In just the last few years, there has been an explosion of AI solutions coming to market. In most cases this is not obvious, since AI, just like electricity, will not be a product in itself but an enhancement to our lives. Just as people wanted light in their houses, not electricity for its own sake, people will want the products that AI brings. It will come in hidden form, making the tools we use more clever and convenient. In a few years, our personal digital assistant will be something we cannot live without.

This text is based on a new addition to the 2017 edition of my textbook, New Technology.