As technology advances at an ever-increasing clip, the data produced by these new and improved technologies will hold the key to innovation and agility.
One little-known axiom can explain the age of iPhone releases, artificial intelligence and the Internet of Things: Moore’s Law. Even if you have never heard the name, Moore’s Law will be instinctively familiar to anyone who has even remotely followed these developments. Strictly, the observation is that the number of transistors on a chip doubles roughly every two years; in its popular form, it says that computer chip performance doubles every 18 months. These repeated doublings amount to almost unfathomable increases in data-processing speed and capability over the past few decades. The iPhone 5S of today has a processor more powerful than the Cray supercomputers of yesteryear. What once required a large building and a water-cooling system now fits in a pocket.
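The compounding behind that claim can be sketched in a few lines of Python. This uses the popular 18-month doubling period quoted above; the 30-year horizon is an illustrative assumption, not a figure from the article:

```python
# A minimal sketch of the compounding implied by Moore's Law, assuming
# the popular formulation: performance doubles every 18 months.
def moores_law_factor(years, doubling_period_years=1.5):
    """Return the performance multiplier after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# 30 years of 18-month doublings is 20 doublings:
print(f"{moores_law_factor(30):,.0f}x")  # 1,048,576x
```

Twenty doublings multiply performance by more than a million, which is why a pocket-sized phone can outrun a room-sized supercomputer from a few decades earlier.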
Such advances are happening not only in computing, but also in robotics, sensors, artificial intelligence, synthetic biology, 3D printing and medicine. Futurist Ray Kurzweil has noted that once any technology becomes an information technology, it starts advancing exponentially. That is what is happening in these fields. They are generating vast amounts of data, which – combined with what already exists – opens up possibilities for solving grand problems and transforming industries.
Here are some examples of these technology advances.
In 2000, scientists at Celera Corporation announced that they had raced ahead of the US government in decoding the DNA of a human being. They used the latest sequencing technology, plus the data available from the Human Genome Project, to create a working draft of the genome. Reaching this milestone took more than a decade and cost billions of dollars. Today, full sequencing costs about $3,000 and can be done in a day. The price of genome sequencing is dropping at double the rate of Moore’s Law. If this continues – as is likely – a full human genome sequence will cost less than $100 within five years and will take minutes rather than days.
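A rough back-of-the-envelope projection shows why the five-year, sub-$100 claim is plausible. This sketch assumes the price simply halves every nine months – twice the pace of Moore’s Law’s 18-month period, as described above – starting from today’s roughly $3,000 full-genome cost:

```python
# A rough cost-projection sketch, assuming a steady halving of genome
# sequencing prices every nine months (double the rate of Moore's Law).
def projected_cost(start_cost, years, halving_period_years=0.75):
    """Return the projected cost after `years` of steady halving."""
    return start_cost / (2 ** (years / halving_period_years))

print(f"${projected_cost(3000, 5):.2f}")  # well under $100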
Micro-Electro-Mechanical Systems are enabling the development of inexpensive gyroscopes, accelerometers, and temperature, pressure, chemical and DNA sensors. With these, we can build iPhone cases that act like medical assistants and detect disease; smart pills that we swallow to monitor our internals; tattooed body sensors to monitor heart, brain and body activity; and soil and humidity detectors to control irrigation.
Artificial Intelligence (AI) has progressed to the point that a computer was able to defeat the most capable and knowledgeable human contestants on the TV show Jeopardy. In the show, the computer used AI technology to glean enough knowledge, much of it derived from the web, to defeat the human champions. The technology that enabled this – IBM Watson – is now available to developers everywhere. AI systems are being trained to perform medical diagnosis, drive autonomous cars and operate call centres. They are finding their way into manufacturing and powering robots that do human chores.
3D printers can transform materials such as plastic, ceramics, glass and titanium into mechanical devices, medical implants, jewellery and even clothing.
The cheapest 3D printers, which print rudimentary objects, currently sell for between $500 and $1,000. Soon we will have printers for this price that can print toys and household goods. By the end of this decade, we will see 3D printers doing the small-scale production of previously labour-intensive crafts and goods. In the next decade, we may be 3D-printing buildings and electronics.
Regenerative medicine has been used to implant lab-grown skin, tracheas and bladders into humans. Soon, 3D printing technologies will grow human cells, layer by layer, to make replacement skin, body parts and eventually organs such as hearts, livers and kidneys.
Also, Kinkos-like production shops are synthesizing DNA for researchers to create new organisms and synthetic life forms. Using synthetic biology, geneticists are putting individual gene sequences together, much like Lego blocks, to construct living cells. They are creating new organisms with new functions: algae that can be converted into fuel, biosensors that detect disease and plants that glow in the dark.
DNA “printing” is priced by the number of base pairs to be assembled (the chemical “bits” that make up a gene). Today’s cost is about 28 US cents per base pair and prices are falling dramatically. Within a few years, it could cost a hundredth of this amount. Eventually, like laser printers, DNA printers will be inexpensive home devices.
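The per-base-pair pricing above makes the cost arithmetic simple. In this illustration, the 1,000 base-pair gene length is a hypothetical example (real genes vary widely in size); the prices are those quoted in the text:

```python
# A simple cost illustration of DNA "printing" priced per base pair,
# using the 28-cent figure and hundredth-of-that projection above.
def synthesis_cost(base_pairs, price_per_bp):
    """Total DNA synthesis cost at a flat per-base-pair price."""
    return base_pairs * price_per_bp

today = synthesis_cost(1000, 0.28)         # $280 at 28 US cents per base pair
future = synthesis_cost(1000, 0.28 / 100)  # $2.80 at a hundredth of that
print(f"${today:.2f} today vs ${future:.2f} projected")
```

At a hundredth of today’s price, printing a modest gene costs less than a cup of coffee – the economics that would put DNA printers in homes.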
Using nanotechnology, engineers and scientists are developing many new types of materials such as carbon nanotubes, ceramic-matrix nanocomposites (and their metal-matrix and polymer-matrix equivalents) and new carbon fibres. These new materials enable designers to create products that are stronger, lighter, more energy efficient and more durable than anything that exists today.
Again, all of these are information-based technologies.
Over the centuries, we gathered data on things such as climate, demographics, and business and government transactions. Our farmers kept track of the weather so that they would know when to grow their crops; we had land records so that we could own property; and we developed phone books so that we could find people.
About 15 years ago, we started creating web pages on the internet. Interested parties started collecting data about what news we read, where we shopped, what sites we surfed, what music we listened to, what movies we watched and where we travelled. And they began to correlate this with information about our gender, age, education, location and socioeconomic status. Then, with the advent of LinkedIn, Myspace, Facebook, Twitter, and many other social-media tools, we began to volunteer private information about our work history, social and business contacts, and what we like – our food, entertainment, and even our sexual preferences and spiritual values.
Today, there are more than 100 hours of video uploaded to YouTube every minute and far more video is being collected worldwide through the surveillance cameras that you see everywhere. Mobile phone apps are keeping track of our every movement – everywhere we go; how fast we move; what time we wake.
Now combine all of these data with the exponential technologies I detailed earlier, and you have the ability to create world-changing innovations (as well as privacy and security nightmares).
Consider what could happen if we correlated information about a person’s genome, lifestyle habits and location with their medical history and the medications they take. We would understand the true effectiveness of drugs and their side effects. This would change the way drugs are tested and prescribed. And then, when genome data becomes available for millions, perhaps billions of people, we could discover the correlations between disease and DNA to prescribe personalized medications – tailored to an individual’s DNA. We are talking about a revolution in health and medicine.
In schools, classes are usually so large that the teacher does not get to know the student – particularly the child’s other classes, habits and progression through the years. What if a digital tutor could keep track of a child’s progression, likes and dislikes, learning preferences, and strengths and weaknesses? Using data gathered by digital learning devices, test scores, attendance and habits, the teacher could be informed on which students to focus on, what to emphasize and how best to teach an individual child. This could change the education system itself.
And then combine the data that is available on a person’s shopping habits with their social preferences, health and location. We could have shopping assistants and personal designers creating new products, including clothing, that are 3D-printed or custom-manufactured for the individual. An IBM Watson-like assistant could anticipate what a person wants to wear or eat and have it ready for them.
Data can assist human decision-making in almost every sector. Analyzing large amounts of data from different perspectives can unearth new insights and prevent errors. These data can be used to decide where a new store should be located, when to water a field or spray it with insecticide, or when a police car should patrol a neighbourhood. In this exponential era, data is the key to competition and productivity.
Artificial intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages.
It is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy, then at Dartmouth College.
No computer is yet able to exhibit full artificial intelligence (that is, to simulate human behaviour). The greatest advances have occurred in game playing. The best computer chess programs are now capable of beating humans: in May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.
Vivek Wadhwa is vice-president of research and innovation at Singularity University; director of research at the Center for Entrepreneurship and Research Commercialization at Duke University; fellow, Stanford University; and distinguished visiting scholar, Emory University.
An adapted version of this article appeared on the Dialogue Review website.