These are heady times. Recent years have been remarkably productive in terms of scientific and technological advances. After so many decades, we are no longer talking about the ascendance or preeminence of just one field such as biology, physics, chemistry, computer science, or engineering. Everything is now more tightly interwoven and interdependent. An advance in one field is almost certain to push other fields forward in significant ways.
This article highlights some of the technology areas that are expected, at least over the next few years, to continue to capture or drive significant capital and infrastructure investment, research and development, intellectual manpower, ecosystem development, and consumer and enterprise market adoption.
Hype can be very effective in promoting interest in a technology, but too much of it can lead to the technology’s untimely, if perhaps undeserved, demise as misgivings about the overhyped technology fester. Some of these emerging technologies also have the potential to radically transform our lives for the better; at the same time, some of them could wield just as powerful, if not greater, potential to inflict unforeseeable harm on many. The hope, of course, is that each technological advance will ultimately deliver more benefits than drawbacks while improving our circumstances.
But we need a tempered sense of what is real and what is merely a possibility; to confuse one for the other could seriously impede whatever progress we have made. So, for each technology in this list, we present both the exciting possibilities it offers and the challenges it faces.
We also provide short descriptions of the technologies in this article. By now, many people have likely read and heard about AI, CRISPR, bots, blockchain, VR, AR, etc. The fact is, most people probably have only the slightest inkling of what these technologies really are and how they work (for example, ask someone you know to define "cloud"). So, we hope that our short technology descriptions will give a better sense of how these technologies will impact our lives.
Here goes our list:
1. EHR/Telehealth/Telemedicine
Healthcare is one of the most important target markets for many emerging and established technology companies. They are promoting the use of smart and predictive analytics systems, electronic health records (EHR), AI-based applications, telemedicine, wearables and IoT, and novel patient monitoring systems to implement much-needed improvements in the healthcare industry. We focus on EHR and Telehealth/Telemedicine in this article.
EHR vs. EMR
At least in the US, electronic health record (EHR) and electronic medical record (EMR) are not considered technically synonymous. EHR has the broader meaning and is considered to encompass EMR; it is the term adopted by the Office of the National Coordinator for Health Information Technology (ONC) and the Centers for Medicare and Medicaid Services (CMS).[1] EMR refers to a digital version of paper patient records that contains a patient’s diagnosis and treatment history and is intended for use only within a healthcare facility. EHR, on the other hand, pertains to records that cover not only a patient’s medical diagnosis and treatment but also the patient’s overall health. An EHR can therefore contain data not found in typical hospital patient records. For example, it might include data from a patient’s remote heart rate monitoring as part of a regular exercise regimen or therapy outside a hospital setting, or whether the patient is strictly following a diet recommended by a dietitian at a different healthcare facility in another city or state.
An important difference between EHR and EMR is that the former is specifically designed to allow secure sharing of a patient’s medical records among various healthcare providers. Rather than being accessible only within one healthcare organization or facility, as is the case with EMR, an EHR essentially goes wherever the patient goes. A patient’s EHR can thus combine records entered by healthcare practitioners from different clinics, hospitals, physical therapy and rehabilitation facilities, or nursing homes that are all involved in the patient’s care.
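To make the distinction concrete, here is a minimal Python sketch of an EHR as a data structure that aggregates entries from multiple providers, including wellness data recorded outside a hospital. The field names, categories, and providers are hypothetical and purely illustrative; they are not drawn from any actual EHR standard adopted by ONC or CMS.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical sketch: an EHR aggregates entries from many providers, including
# non-clinical wellness data; an EMR would hold only one facility's records.

@dataclass
class Entry:
    recorded_on: date
    provider: str       # clinic, hospital, rehab facility, home monitoring, etc.
    category: str       # e.g., "diagnosis", "treatment", "wellness"
    detail: str

@dataclass
class ElectronicHealthRecord:
    patient_id: str
    entries: List[Entry] = field(default_factory=list)

    def add(self, entry: Entry) -> None:
        self.entries.append(entry)

    def from_provider(self, provider: str) -> List[Entry]:
        """All entries contributed by one provider, wherever the patient was seen."""
        return [e for e in self.entries if e.provider == provider]

ehr = ElectronicHealthRecord("patient-001")
ehr.add(Entry(date(2024, 3, 1), "City Hospital", "diagnosis", "Type 2 diabetes"))
ehr.add(Entry(date(2024, 4, 2), "Home monitoring", "wellness", "Average resting heart rate 68 bpm"))
print(len(ehr.from_provider("City Hospital")))   # 1
```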
Telehealth vs. Telemedicine
Similarly, the term “telehealth” is technically distinct from “telemedicine.” Telemedicine refers specifically to remote clinical services, while telehealth covers a wider range of patient-care-related services and technologies delivered at a distance via the internet or cloud-based connectivity, including non-clinical services. Examples of non-clinical remote services include online general health and fitness information, education, training, programs, and counseling, as well as continuing fitness and wellness monitoring for patients. In both cases, patients may need to use, acquire, or lease devices such as wearables, microphones, and video or other imaging systems to enable preliminary remote doctor consultations, continuous monitoring of their health status, management of chronic ailments, and so on. A third term, “telematics,” is sometimes used to combine telehealth and telemedicine into a single general category.
Promises: personalized medicine; prompt and appropriate emergency treatment; EHR can reduce human errors, streamline patient data collection, give patients access to their own records, facilitate data sharing, and improve diagnostic accuracy; telemedicine and telehealth, in conjunction with wearables and IoT, promise to reduce medical costs, save significant time, and provide tremendous convenience through remote patient monitoring and medical consultation, real-time interactive services, access to medical advice in emergencies, and fewer unnecessary hospital visits
Challenges: data and system incompatibility; data security concerns; lack of standards; barriers to adoption (cost, training, perceived risk of committing to one platform over another, etc.); some patients may not have access to, or cannot afford, reliable internet connections and the required hardware or software; insurance coverage, billing, and reimbursement issues; state-law restrictions on medical practice across state lines; other complex government regulations; the need to reduce the time spent entering patient data so more time is available for patient-doctor interaction; interoperability issues among different hardware and software platforms or systems
2. Blockchain
A blockchain is a distributed and decentralized digital ledger organized into data segments called blocks.[2] This digital ledger is essentially a record of transactions involving various types of currencies and documents, including contracts, wills, receipts, invoices, titles, deeds, patents, etc. The data blocks are connected using an encryption-based verification system: each block identifies and refers to the previous block, creating a sequential, continuous, and permanent chain. Attempting to alter, hide, or delete previously recorded transactions or records would disrupt the chain and alert all users. The ledger is not kept in a single location or managed or controlled by a single central entity; instead, it is distributed simultaneously across many computers. A blockchain can also be configured to initiate transactions automatically. Any entity, such as an individual, company, organization, or machine, can transact with other entities with little or no need for an intermediary.
Blockchain is not the same as Bitcoin, which is a form of decentralized digital currency, also referred to as cryptocurrency. Specifically, blockchain has been used as a platform for Bitcoin, i.e., as a digital ledger for recording Bitcoin transactions.
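The hash-linked structure described above can be illustrated with a short, simplified Python sketch. It is only a toy: real blockchains add consensus mechanisms, digital signatures, and peer-to-peer replication of the ledger, none of which are modeled here.

```python
import hashlib
import json
import time

# A toy illustration of the hash-linked ledger described above.

def block_hash(body: dict) -> str:
    """Hash a block's contents so any later change becomes detectable."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def new_block(prev_hash: str, transactions: list) -> dict:
    """Create a block that records transactions and points to the previous block."""
    body = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    return {**body, "hash": block_hash(body)}

def chain_is_valid(chain: list) -> bool:
    """Verify each block's own hash and its link to the block before it."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
        if block["hash"] != block_hash(body):
            return False                       # a block's contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                       # the link to the previous block is broken
    return True

genesis = new_block("0" * 64, ["genesis"])
chain = [genesis, new_block(genesis["hash"], ["Alice pays Bob 5"])]
print(chain_is_valid(chain))                       # True
chain[0]["transactions"] = ["Alice pays Bob 500"]  # try to rewrite history...
print(chain_is_valid(chain))                       # False: the tampering is detected
```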
Promises: can be used for various kinds of instruments including contracts, bills, wills, patents, and virtual currencies; transactions are decentralized, eliminating the need for many intermediaries; transactions are transparent and relatively secure; immediate reconciliation and settlement of transactions
Challenges: still in its early stages; transactions can be slow when those offering higher rewards or fees are prioritized; transaction errors can be hard to undo; fees can spike when demand for processing exceeds capacity; instability of bitcoin currency values; bitcoin exchanges are susceptible to hacking and potential collapse; digital currency markets remain highly speculative and lightly regulated; wide adoption could take a while; transactions can be very complex and are still not well understood by many.
3. Synthetic Biology
Synthetic biology concerns the design and construction of biology-based components and systems not found in nature. It is a rather loosely defined term that generally refers to an interdisciplinary field applying engineering principles to biology through known DNA synthesis methods, the latest knowledge of genomics, various experimental techniques, computer simulation, and so on. It allows rapid synthesis of known DNA sequences and their assembly into new genomes, e.g., assembling new microbial genomes from a collection of known or previously cataloged genetic components that are then introduced into a microbe or cell. Modified bacterial chromosomes can be designed and created for use in the production of biological pharmaceuticals, biofuels, chemicals, and food ingredients. Modified cells, microbes, or organisms can also be used to enable complex or novel synthesis routes, e.g., producing enzymes with particular biological functions needed for a multi-step natural product synthesis.
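As a loose analogy to the assembly of cataloged genetic parts described above, the toy Python sketch below strings named parts together into a single designed sequence. The part names and sequences are invented for illustration; real workflows involve DNA synthesis, assembly chemistry, and verification steps that a few lines of string handling cannot capture.

```python
# Toy sketch: assemble a designed sequence from previously cataloged parts.
# Part names and sequences are invented for illustration only.

PARTS = {
    "promoter_A": "TTGACATATAAT",
    "gene_of_interest": "ATGGCTAGCAAAGGAGAA",
    "terminator_B": "AAAAAAGGCTCC",
}

def assemble(part_names):
    """Concatenate cataloged parts, in order, into one designed construct."""
    return "".join(PARTS[name] for name in part_names)

construct = assemble(["promoter_A", "gene_of_interest", "terminator_B"])
gc_fraction = (construct.count("G") + construct.count("C")) / len(construct)
print(len(construct), round(gc_fraction, 2))
```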
Promises: advance personalized medicine, facilitate drug and vaccine discovery, design and create biological sensors and circuits, high-yield crops, biological-based drug-delivery systems; allow large-scale recombinant protein production and synthesis of materials, programmable materials, industrial enzymes, and therapeutic bacteria that identify and correct disease states; potential for large-scale pharmaceutical discovery; applications in medicine, materials, agriculture, chemicals
Challenges: some areas still unproven; potential adverse effects on human health and the environment; potential for uncontrolled spread of new organisms or modified genetic material; potential to displace farmers and worsen economic inequality; security concerns; potential threat to biological diversity; potential for use in illicit drug or substance production
4. CRISPR/Base Editing
Genome editing or gene editing refers to technologies that allow an organism’s DNA to be altered by adding, removing, or replacing genetic material at particular positions in the genome. The genome is the entire set of genes or genetic material found in an organism or cell. Genome editing has generated tremendous excitement for its potential use in medical diagnosis and in the prevention and treatment of various human diseases, ranging from single-gene disorders (e.g., hemophilia, cystic fibrosis, and sickle cell disease) to more complex diseases (e.g., heart ailments, cancer, HIV infection).
Various genome editing techniques have been developed. CRISPR (e.g., CRISPR-Cas9) has invigorated biologists because it promises improved speed, lower cost, better accuracy, and greater efficiency compared with earlier genome editing methods. In this technique, a small piece of RNA is synthesized to bind both to a specific target DNA sequence in the genome and to the Cas9 enzyme. The guide RNA identifies the target DNA sequence, which the Cas9 enzyme then cuts at a specific location. The cell’s own DNA repair machinery is then exploited to insert or delete pieces of genetic material, or to alter the DNA by substituting an existing DNA segment with a synthesized DNA of a specific sequence.
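The targeting step can be pictured with a simplified sketch: a guide sequence locates its matching site in a genome string, and new material is spliced in at the "cut." The sequences below are toy examples; real guide design must also account for PAM sites, off-target matches, and the cell's repair pathways.

```python
# Simplified sketch of CRISPR-style targeting; toy sequences only.

def find_target(genome: str, guide: str) -> int:
    """Return the index where the guide matches the genome, or -1 if absent."""
    return genome.find(guide)

def cut_and_insert(genome: str, guide: str, insert: str) -> str:
    """Simulate a cut at the target site and repair that incorporates new material."""
    site = find_target(genome, guide)
    if site == -1:
        return genome                          # no match: nothing is edited
    cut_point = site + len(guide)              # arbitrary toy choice of cut position
    return genome[:cut_point] + insert + genome[cut_point:]

genome = "AAGTCCGGATTACAGGCT"
print(cut_and_insert(genome, guide="GGATTACA", insert="TTT"))
```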
Base editing is a more recent genome editing technology that allows direct and irreversible conversion of a specific DNA base into another at a targeted genomic location.[3] A major advantage of this technique is that, unlike CRISPR-Cas9 and other genome editing techniques that rely on cutting both DNA strands, it offers higher editing accuracy; recent base editors such as BE4 also incorporate improved inhibition of the cell’s base excision repair to minimize unwanted editing side products.[3] This has potential for disease prevention and treatment because base editing can directly correct certain point mutations, which are associated with many diseases.
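By contrast with the cut-and-repair sketch above, base editing changes a single base in place. The toy sketch below converts a C to a T (or a G to an A, matching the C:G-to-T:A editors of [3]) at a chosen position; positions and sequences are illustrative only and no enzyme chemistry is modeled.

```python
# Toy sketch of base editing: convert one base at a targeted position.

CONVERSIONS = {"C": "T", "G": "A"}   # C:G-to-T:A editing, as in the editors of [3]

def base_edit(sequence: str, position: int) -> str:
    """Replace the base at `position` with its converted counterpart, if possible."""
    base = sequence[position]
    if base not in CONVERSIONS:
        return sequence                  # this editor cannot convert A or T
    return sequence[:position] + CONVERSIONS[base] + sequence[position + 1:]

print(base_edit("ATGCCGTA", 3))          # ATGTCGTA: the C at index 3 becomes T
```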
Promises: relative simplicity; can be applied directly in embryos; potential for treating or curing various conditions such as cancer, genetic disorders, obesity, and Alzheimer’s disease; speed up the identification of disease-associated genes and the development of therapies
Challenges: potential for editing the wrong part of the genome; potentially unpredictable outcomes such as unintended mutations; various ethical concerns, such as use for designing the physical attributes of human embryos and other non-therapeutic purposes; research on non-therapeutic uses could delay therapeutic work; therapies can be extremely expensive, so only those who can afford them will reap the benefits; possible loss of human diversity
5. Mixed Reality/3D Projections
The literature on technologies related to virtual reality (VR), augmented reality (AR), etc. occasionally uses terms with ambiguous meanings, such as mixed reality (MR), assisted reality, hybrid reality, artificial reality, or synthetic reality, in addition to the more commonly used AR and VR. Even for AR and VR, the literature is replete with inconsistent and arbitrary definitions and usage. For example, one definition treats VR as including MR, AR, and everything else, while most articles define VR as an environment with entirely, if not mostly, digital content.
Milgram et al. defined the term mixed reality in their 1994 paper as something falling within a reality-virtuality continuum [4]. According to their definition, “mixed reality” refers to a combination of real and virtual objects presented within a single display. The reality-virtuality continuum is bounded by two endpoints: an environment comprising only real objects on one end (reality) and only virtual objects on the other (virtuality). Everything between those two limits constitutes mixed reality. The term mixed reality under this definition would therefore include AR and presumably other variations such as hybrid reality and assisted reality.
If VR is considered to contain entirely virtual or digitally-generated content (which is a common interpretation used or suggested in many articles), then it cannot fall under MR because it would correspond to Milgram et al.’s “virtuality” endpoint.
But if VR is defined to also include display environments that contain more virtual than real content (the argument being that AR, which has more real than virtual content, must have a counterpart), then VR and AR would both fall under Milgram et al.’s more general MR category. In any case, terms such as synthetic reality and artificial reality would probably be closer to VR than to AR.
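One way to make the continuum concrete is to classify a displayed scene by the fraction of its content that is virtual, as in the toy Python sketch below. The thresholds are arbitrary and only meant to show how AR and mostly-virtual displays sit between the two endpoints under Milgram et al.'s framing.

```python
# Toy classification along the reality-virtuality continuum: label a scene by the
# fraction of its displayed content that is virtual (thresholds are arbitrary).

def classify(virtual_fraction: float) -> str:
    if virtual_fraction == 0.0:
        return "reality (only real objects)"
    if virtual_fraction == 1.0:
        return "virtuality (only virtual objects)"
    if virtual_fraction < 0.5:
        return "mixed reality: AR (more real than virtual)"
    return "mixed reality: more virtual than real"

for fraction in (0.0, 0.2, 0.8, 1.0):
    print(fraction, "->", classify(fraction))
```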
It would certainly help if industry standards eventually clarified these confusing meanings and seemingly arbitrary interpretations.
Promises: provide immersive sensory experience, many useful potential applications in various fields (medicine, entertainment, gaming, manufacturing, education, leisure, etc.), vast potential market, enhance collaboration and interactions in business and social networks, facilitate training, demonstrations, and learning, enable viewing of digital products and services catalogs, enhance social, gaming, and online shopping experience, potential as an all-in-one communications, control, and monitoring system
Challenges: dearth of compelling content, battery life and portability issues, affordability, ecosystem and platform compatibility/interoperability issues, data latency and frame rate issues, limited field of view, prohibitive costs for some systems, potential wireless bandwidth limitations
6. AI/Bots/NLP
As used in this article, AI refers to a general class of algorithms, software, devices, and systems designed to simulate human intelligence to allow the performance of various functions, including both repetitive and labor-intensive work and tasks that humans excel at, such as pattern and image recognition. It encompasses neural networks, machine learning, deep learning, expert systems, fuzzy logic, genetic algorithms, etc.
Bots are software or algorithms designed to automate tasks such as answering basic client queries on a website’s customer service section, scheduling appointments or meetings and providing notifications using a calendar in your email client, or retrieving and displaying information. Nowadays, many bots are designed to engage in human-like conversations with users, such as when a customer orders certain products or services online. These bots are sometimes called chatbots and are often embedded in messaging apps. As AI becomes more sophisticated, bots’ capabilities are expected to expand.
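At its simplest, a bot of the kind described above can be a set of pattern-matching rules mapped to canned replies, as in the hypothetical Python sketch below. The patterns and replies are invented; production chatbots typically layer NLP and dialogue management on top of (or in place of) such rules.

```python
import re

# Minimal rule-based bot: patterns and replies are invented for illustration.

RULES = [
    (re.compile(r"\b(hours|open)\b", re.I), "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(order|status)\b", re.I), "Please share your order number and I will look it up."),
    (re.compile(r"\b(appointment|schedule)\b", re.I), "I can book a slot for you; what day works best?"),
]

def reply(message: str) -> str:
    """Return the first canned answer whose pattern matches, else hand off to a human."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Let me connect you with a human agent."

print(reply("What are your hours?"))
print(reply("I'd like to schedule an appointment."))
```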
Natural language processing (NLP) is a branch of artificial intelligence that, in combination with disciplines like computational linguistics and computer science, allows computers to engage in some form of human-like conversation. NLP relies on various methods to decipher and mimic human language (e.g., rule-based and algorithmic techniques, machine learning, statistical analysis) and to handle different applications and data types such as text and voice. NLP divides human language into shorter, basic units, determines how these units are connected, and uses the relationships between them to derive meaning. Tasks that NLP can perform include searching, indexing, language translation, text summarization, converting speech to text and vice versa, and classifying topics or content. NLP can also perform more sophisticated tasks such as extracting topic, tone, and opinion from a body of text.
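The "divide language into shorter units" idea can be illustrated with the crudest possible step: tokenizing a sentence and counting word frequencies, as in the sketch below. Real NLP pipelines add part-of-speech tagging, parsing, embeddings, and statistical or neural models on top of this.

```python
import re
from collections import Counter

# Crude sketch of dividing language into smaller units: tokenize and count words.

def tokenize(text: str) -> list:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

text = "Natural language processing divides human language into shorter, basic units."
tokens = tokenize(text)
print(tokens)
print(Counter(tokens).most_common(3))   # 'language' appears twice in this sentence
```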
Promises: speed up scientific discovery; enhance work efficiency, learning, planning, business and personnel management, transportation traffic management, device and utility power management, and supply chains; inform better government policies; provide more timely and accurate medical diagnosis and treatment; improve product time to market; provide access to personal digital assistants; enhance communications; improve risk management strategy; protect and strengthen network security, etc.
Challenges: displace workers in certain fields; the need to provide more welfare benefits to jobless citizens; higher taxes to support an expanded welfare system; potential to widen economic inequality; inherent limitations on many complex tasks that humans can do; not all displaced workers can be retrained for increasingly complex tasks; potential for misuse with serious and far-reaching consequences; unintended consequences; various ethical issues; data privacy issues; control issues; how to ensure AI decisions are consistent with human values
7. Internet of Things (IoT)
IoT, in a broad sense, refers to a collection of wireless or wired devices, wearables, vehicles, appliances, etc. that are interconnected via the internet or cloud and able to perform one or more of the following within the group of connected devices, with or without prompting from human users: identify and communicate with each other, transmit or exchange data, transmit or receive instructions, send or receive notifications, collect data, measure various types of variables, store or retrieve data, display information, and so on. The connected devices, appliances, vehicles, etc. can include any combination of sensors, software or algorithms, processors, storage, transceivers, and power supplies for communications, object identification, or for measuring, detecting, storing, and transmitting or receiving data on variables such as speed, orientation, position, location, environmental variables (e.g., temperature, humidity, pollution levels), and health-related variables (e.g., heart rate, body temperature, breathing patterns).
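As a concrete (and hypothetical) illustration of the data exchange described above, the sketch below packages a single sensor measurement as a JSON payload that a connected device could transmit. The field names are invented; real deployments typically publish such readings over a broker protocol such as MQTT, with authentication and encryption.

```python
import json
import time

# Hypothetical sketch of an IoT reading packaged for transmission.

def make_reading(device_id: str, variable: str, value: float, unit: str) -> str:
    """Package one sensor measurement as a JSON payload."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "variable": variable,
        "value": value,
        "unit": unit,
    })

print(make_reading("thermostat-42", "temperature", 21.5, "C"))
```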
Promises: help manage our lives, work, health, and time; improve therapy and rehabilitation; manage, monitor, and control devices, equipment, factories, vehicles, buildings, and environmental control systems; improve supply chain operations; enhance planning and predictive capabilities; facilitate scientific discovery
Challenges: security and privacy issues; interoperability and compatibility issues; potential to widen economic inequality; potential for misuse; potential for much more substantial downtime when devices and machines break down; the more valuable tasks left to humans aren’t necessarily suitable for everyone; inherent physical and mental limits to what humans can do
References
[1] P. Garrett and J. Seidman, “EMR vs EHR – What is the Difference?,” Jan. 4, 2011, https://www.healthit.gov/buzz-blog/electronic-health-and-medical-records/emr-vs-ehr-difference/
[2] “Understanding the fundamentals of IBM Blockchain,” https://www.ibm.com/blockchain/what-is-blockchain.html
[3] Komor et al., “Improved base excision repair inhibition and bacteriophage Mu Gam protein yields C:G-to-T:A base editors with higher efficiency and product purity,” Science Advances, vol. 3, no. 8, eaao4774, Aug. 30, 2017, DOI: 10.1126/sciadv.aao4774, https://advances.sciencemag.org/content/3/8/eaao4774
[4] Milgram et al., “Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum,” Proc. SPIE: Telemanipulator and Telepresence Technologies, vol. 2351, 1994, pp. 282-292.