The majority of health care executives (91 percent) are confident they will see a return on investment (ROI) from their artificial intelligence (AI) investments, although not immediately, and foresee that AI's greatest impact will be on improving health care, according to an OptumIQ survey.
Most (94 percent) health care leaders said their organizations continue to invest in and make progress on implementing AI, with 75 percent saying they are implementing AI or have plans to execute an AI strategy, based on OptumIQ's survey of 500 senior U.S. health care industry executives, primarily from hospitals, clinics and health systems, life sciences organizations, health plans and employers. OptumIQ is the data and analytics arm of Optum, an information- and technology-enabled health services business that is part of UnitedHealth Group.
While many healthcare organizations have plans, progress is mixed across sectors. Of the 75 percent who are implementing AI or have plans to execute an AI strategy, 42 percent of those organizations have a strategy but have not yet implemented it. Employers are furthest along, with 22 percent reporting their AI implementations are at a late stage, with nearly full deployment.
The average AI implementation is estimated to cost $32.4 million over five years. The majority of respondents (65 percent) do not expect to see an ROI for at least four years, with the average expected period being five years. However, employers (38 percent) and health plans (20 percent) expect ROI sooner, in three years or less, according to the survey.
The survey found that health care leaders universally agree the greatest impact of AI investment will be on improving health care. Thirty-six percent expect AI will improve the patient experience; 33 percent anticipate AI will decrease per-capita cost of care; and 31 percent believe AI will improve health outcomes.
Most health care leaders believe AI can make care more affordable and accessible. Ninety-four percent of respondents agree that AI technology is the most reliable path toward equitable, accessible and affordable health care.
AI will make care more precise and faster, according to respondents. The top two benefits respondents expect to see from incorporating AI into their organizations are more accurate diagnosis and increased efficiency.
The survey found that respondents are looking to AI to solve immediate data challenges – from routine tasks to truly understanding consumers’ health needs. Of those health organizations that are already investing in and implementing AI:
43 percent are automating business processes, such as administrative operations or customer service;
36 percent are using AI to detect patterns in health care fraud, waste and abuse; and
31 percent are using AI to monitor users with Internet of Things (IoT) devices, such as wearable technology.
With more organizations seeing the benefit of adopting an AI strategy, 92 percent agree that hiring candidates who have experience working with AI technology is a priority for their organization. To meet this need, nearly half (45 percent) of health care leaders estimate that more than 30 percent of new hires will be in positions requiring engagement with or implementation of AI in the next 12 months. However, health organizations seeking to hire experienced staff will likely face talent shortages.
“Artificial intelligence has the potential to transform health care by helping predict disease and putting the right insights into the hands of clinicians as they treat patients, which can reduce the total cost of care,” Eric Murphy, CEO of OptumInsight, said.
“Analytics isn’t the end, it’s the beginning – it’s what you do with the insights to drive care improvement and reduce administrative waste,” Steve Griffiths, senior vice president and chief operating officer of Optum Enterprise Analytics, said. “For AI to successfully solve health care’s biggest challenges, organizations need to employ a unique combination of curated data, analytics and health care expertise… We are already seeing a race for AI talent in the industry that will grow as adoption continues to increase.”
Game development resource 80 LEVEL has compiled a comprehensive list of the “10 Best Universities for Game Development.” The mini-site, to be updated annually, will help aspiring game developers find the best fit for the programs they’re seeking with information on curriculum, educational staff, student success stories, and more.
To assist students interested in the world of video game development, 80 LEVEL analyzed more than 105 universities for both undergraduate and graduate programs, and these are the results for the top 10:
DigiPen is a private school founded in 1988 by Claude Comair. In 1998 it became the first school in the world to offer a bachelor's degree in video game development. Today, DigiPen is a prominent school for games and technology, with campuses in the US (Redmond, Washington), Singapore, and Spain.
The curriculum here is pretty versatile, covering a number of different topics, including computer science, game design, music and sound design, and digital art and animation. DigiPen also has an active R&D department, which develops tech for different clients including Boeing, Formula One and INDYCAR.
University of Southern California is a large private school located in Los Angeles. It was founded in 1880, nearly a hundred years before the video game industry emerged. The Interactive Media & Games Division was added to the school's extensive portfolio in 2001. Today, USC Games is considered one of the best programs in the US by the Princeton Review, an achievement owed to the close collaboration between faculty of the Viterbi School of Engineering's Department of Computer Science and the Interactive Media & Games Division.
Michigan State University was founded in 1855 and is a public research university in East Lansing, Michigan. MSU is one of the largest universities in the United States (in terms of enrollment) and has approximately 552,000 living alumni worldwide. It’s famous for its research contributions, sports activities and game development courses.
The Game Design and Development Program at Michigan State University was founded in 2005 and has grown by leaps and bounds into a top-10-ranked program by the Princeton Review. The program draws on a mix of disciplines and backgrounds, bringing together designers, artists and programmers.
The Entertainment Arts and Engineering Master Games Studio (EAE: MGS, MEAE) at the University of Utah offers students an intriguing way into the world of video game design. The program uses a cohort model, in which students remain together throughout the entire two years. There are four possible tracks to apply for: Game Arts, Game Engineering, Game Production, or Technical Art, plus a wide range of electives in video game development.
Students enrolled in the Master of Entertainment Arts and Engineering (MEAE) degree program are typically interested in careers in interactive entertainment, and the curriculum is built with this goal in mind. The university also offers the opportunity to develop and enhance a professional game portfolio through its "studio simulation" projects courses.
MIT is internationally recognized as one of the best technical schools in the world, so it is no surprise that it also excels in video game development. A dedicated division, the MIT Game Lab, focuses on game design and e-sports, helping to train the next generation of game creators.
Full Sail University is a private university based in Florida. Widely appreciated for its music education (41 Full Sail graduates were credited on 46 artists' releases nominated in 36 separate categories at the 2017 Grammy Awards), the school also offers many courses for future game developers. There are a number of bachelor's and master's degrees available in Game Art, Game Design, Game Development and Mobile Gaming. Most of these courses are available online as well as on campus.
The Center for Games and Playable Media at UC Santa Cruz was formally established in 2010, building on work done since the founding of their videogame degree. The center houses the school’s five games-related research labs including the Expressive Intelligence Studio — one of the largest technical game research groups in the world.
There is a great diversity in the faculty’s topics of research. Projects range from work on artificial intelligence and interactive storytelling to natural language dialogue systems, cinematic communication, procedural content generation, human computer interaction, rehabilitation games, computational photography, and level design. Members of the group have published in some of the most respected journals in the fields of game studies, game AI, and game culture. Currently, the group has more than 20 active research grants on games and is the only non-European university taking part in the European Union’s SIREN Project.
Oklahoma Christian University offers a degree in gaming and animation. Game art and animation students are introduced to the tools and principles used by the animation and game development industries. Integral to the university's game development philosophy is the notion that students don't just spend time in classes; they also go on studio and conference field trips and explore animation and game development career opportunities. The curriculum is broad, covering traditional, 3D, and experimental animation. Classes are not limited to 3D modeling but also cover texturing, rigging and game production. Also included are courses on the history of film, video, and animation, which provide useful background knowledge for game development.
The program is housed in the Garvey Center for the Arts, which offers spacious studios and labs, drafting/drawing tables, easels, model stands and ample computer equipment. There is also a 1,200-square-foot University Art Gallery, where works by prominent artists and students are featured.
The Entertainment Technology Center (ETC) at Carnegie Mellon University was founded in 1998. ETC is a professional graduate program for interactive entertainment centered on the two-year Master of Entertainment Technology (MET) degree, established as a joint venture between Carnegie Mellon University's School of Computer Science and the College of Fine Arts.
The School of Design and Informatics is the home of Abertay's undergraduate and postgraduate degree programmes in games, digital arts, cybersecurity and applied computer science. Abertay was the first university to offer degrees in Computer Games Technology and Ethical Hacking, and continues to be recognised as an international leader in its fields; the school is designated the National Centre for Excellence in Computer Games Education, and has pioneered integrated cross-disciplinary practice-based learning through its workplace simulation approach and the White Space environment. In 2015, it was named by the Princeton Review as the best school in Europe to study game design.
The School undertakes research and knowledge exchange activities to ensure the development and health of its subjects and disciplines within the University as a whole. The School is also home of Dare Academy and the Securi-Tay conference. It has long-established professional links with Dundee’s thriving computer games community and international companies including Microsoft, Rockstar North, and Sony, as well as industry bodies such as BAFTA, UKIE and TIGA.
“We hope the 80 LEVEL top 10 universities site becomes the go-to guide for future game developers to help them choose the school of their dreams,” said Kirill Tokarev, co-founder and editor-in-chief of 80 LEVEL. “We put a lot of thought into the list, and we will continue to expand it and add relevant information as we update every year.”
The 80 LEVEL top-10 universities scoring is based on more than 15 criteria, divided into four categories. To obtain the final rating for a particular university, the scores for each criterion were summed into an overall rating. The results were compared with published ratings from the Princeton Review and Game Designing, yielding broadly similar, though not identical, rankings. For more information, please visit http://universities.80.lv/
Amazon lost control of a small number of its cloud services IP addresses for two hours on Tuesday morning when hackers exploited a known Internet-protocol weakness that let them redirect traffic to rogue destinations. By subverting Amazon's domain-resolution service, the attackers masqueraded as cryptocurrency website MyEtherWallet.com and stole about $150,000 in digital coins from unwitting end users. They may have targeted other Amazon customers as well.
The incident, which started around 6 AM California time, hijacked roughly 1,300 IP addresses, Oracle-owned Internet Intelligence said on Twitter. The malicious redirection was caused by fraudulent routes announced by Columbus, Ohio-based eNet, a large Internet service provider referred to as autonomous system 10297. Once in place, the eNet announcement caused Hurricane Electric, and possibly Hurricane Electric customers and other eNet peers, to send traffic over the same unauthorized routes. The 1,300 addresses belonged to Route 53, Amazon's domain name system (DNS) service.
In a statement, Amazon officials wrote: “Neither AWS nor Amazon Route 53 were hacked or compromised. An upstream Internet Service Provider (ISP) was compromised by a malicious actor who then used that provider to announce a subset of Route 53 IP addresses to other networks with whom this ISP was peered. These peered networks, unaware of this issue, accepted these announcements and incorrectly directed a small percentage of traffic for a single customer’s domain to the malicious copy of that domain.”
eNet officials didn’t immediately respond to a request to comment.
The highly suspicious event is the latest to involve Border Gateway Protocol, the technical specification that network operators use to exchange large chunks of Internet traffic. Despite its crucial function in directing wholesale amounts of data, BGP still largely relies on the Internet-equivalent of word of mouth from participants who are presumed to be trustworthy. Organizations such as Amazon whose traffic is hijacked currently have no effective technical means to prevent such attacks.
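The mechanism behind such hijacks can be sketched in a few lines: BGP routers prefer the most specific (longest) matching prefix, so a bogus, more-specific announcement beats the legitimate one. The prefixes below are illustrative, not the exact routes involved in this incident; AS 16509 (Amazon) and AS 10297 (eNet) are used only as labels.

```python
import ipaddress

# Toy routing table: prefix -> origin AS. Routers prefer the most
# specific (longest) matching prefix, regardless of who announced it.
routes = {
    ipaddress.ip_network("205.251.192.0/21"): "AS16509 (Amazon, legitimate)",
}

def best_route(dest, table):
    """Return the origin of the longest matching prefix for dest."""
    dest = ipaddress.ip_address(dest)
    matches = [p for p in table if dest in p]
    return table[max(matches, key=lambda p: p.prefixlen)] if matches else None

dns_server = "205.251.195.10"  # illustrative Route 53 resolver address
print(best_route(dns_server, routes))  # the legitimate Amazon route wins

# An attacker announces a more-specific /24 covering the same resolver;
# longest-prefix matching now diverts that traffic to the hijacker.
routes[ipaddress.ip_network("205.251.195.0/24")] = "AS10297 (hijacker)"
print(best_route(dns_server, routes))
```

Because BGP has no built-in check that AS 10297 is authorized to originate that prefix, peers that accepted the announcement forwarded traffic to the rogue destination.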
Tuesday’s event may also have ties to Russia, because MyEtherWallet traffic was redirected to a server in that country, security researcher Kevin Beaumont said in a blog post. The redirection came by rerouting traffic intended for Amazon’s domain-name system resolvers to a server hosted in Chicago by Equinix that performed a man-in-the-middle attack. MyEtherWallet officials said the hijacking was used to send end users to a phishing site. Participants in this cryptocurrency forum appear to discuss the scam site.
In a statement, Equinix officials wrote: “The server used in this incident was not an Equinix server but rather customer equipment deployed at one of our Chicago IBX data centers. Equinix is in the primary business of providing space, power and a secure interconnected environment for our more than 9,800 customers inside 200 data centers around the world. We generally do not have visibility or control over what our customers – or customers of our customers – do with their equipment.”
The attackers managed to steal about $150,000 in cryptocurrency from MyEtherWallet users, a relatively small haul, most likely because the phishing site used a fake HTTPS certificate that forced end users to click through a browser warning. Still, Beaumont reported that the attacker's wallet already contained about $17 million in digital coins, an indication that the people responsible had significant resources prior to carrying out Tuesday's hack.
The small return, when compared to the resources and difficulty of carrying out the attack, is leading to speculation that MyEtherWallet wasn’t the only target.
“Mounting an attack of this scale requires access to BGP routers at major ISPs and real computing resource [sic] to deal with so much DNS traffic,” Beaumont wrote. “It seems unlikely MyEtherWallet.com was the only target, when they had such levels of access.”
Another theory is that Tuesday’s hijacking was yet another test run. Whatever the cause, it’s a significant development because anyone who can hijack Amazon cloud traffic has the ability to carry out all kinds of nefarious actions.
Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior U.S. business leaders were asked about AI in 2017, only 17 percent said they were familiar with it.1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.
Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.
In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.2
In order to maximize AI benefits, we recommend nine steps for going forward:
encourage greater data access for researchers without compromising users’ personal privacy,
invest more government funding in unclassified AI research,
promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
create a federal AI advisory committee to make policy recommendations,
engage with state and local officials so they enact effective policies,
regulate broad AI principles rather than specific algorithms,
take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
maintain mechanisms for human oversight and control, and
penalize malicious AI behavior and promote cybersecurity.
Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.”3 According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up.4 As such, they operate in an intentional, intelligent, and adaptive manner.
Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.
Artificial intelligence is already altering the world and raising important questions for society, the economy, and governance.
AI generally is undertaken in conjunction with machine learning and data analytics.5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.
AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways.6
One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PricewaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.”7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.
Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.”8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.
Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion.9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.”10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.”11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.
A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions.12 Powered in some places by advanced computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location.13 That dramatically increases storage capacity and decreases processing times.
Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels.14
AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.”15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.”16
Artificial intelligence will accelerate the traditional process of warfare so rapidly that a new term has been coined: hyperwar.
The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time-competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.
While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict.17
Just as AI will profoundly affect the speed of warfare, the proliferation of zero-day or zero-second cyber threats, as well as polymorphic malware, will challenge even the most sophisticated signature-based cyber protection. This will force significant improvements to existing cyber defenses. Increasingly, vulnerable systems will need to migrate to a layered approach to cybersecurity built on cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.
Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030.18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs.19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well.20
AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computed Tomography (CT) images.”21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.
What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of distinguishing unhealthy from healthy nodes.
AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.”22
AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity.23
Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute associate Caleb Watney writes:
Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates.24
However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.”25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.
Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.”26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists.27 Put differently, China has become the world’s leading AI-powered surveillance state.
Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector.28
Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps.29
Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. Mounted on the tops of vehicles, they use imaging in a 360-degree environment from radar and light beams to measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
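The underlying geometry is simple: distance follows from the round-trip time of a light pulse, and relative speed from two successive distance readings. A back-of-the-envelope sketch, with illustrative units:

```python
# The LIDAR arithmetic in miniature: a pulse travels out and back at the
# speed of light, so distance is half the round-trip time times c; the
# closing speed of an object falls out of two successive distances.

C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s):
    """Distance to a reflecting object from a pulse's round-trip time."""
    return C * round_trip_s / 2

def relative_speed_mps(d1_m, d2_m, dt_s):
    """Closing speed between two readings dt_s apart; positive = approaching."""
    return (d1_m - d2_m) / dt_s
```

A 200-nanosecond round trip, for instance, corresponds to an object roughly 30 meters away; production systems perform millions of such measurements per second to build the 360-degree point cloud.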
Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself.30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change.31
Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrates the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service.32
However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred.33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.
Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:
The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls.34
Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.
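The recommendation logic described above can be sketched as a simple rule-based triage function. The call categories, severity scale, and thresholds below are assumptions for illustration, not Cincinnati's actual system:

```python
# Illustrative dispatch-triage sketch (not Cincinnati's actual system):
# recommend on-site treatment vs. hospital transport from call features.

def recommend_response(call_type, severity, weather, similar_calls_transported):
    """Return 'transport' or 'treat-on-site' for a medical emergency call.

    severity: assumed 0-10 triage scale.
    similar_calls_transported: fraction (0-1) of historically similar
    calls that ended in a hospital transport.
    """
    if call_type in {"cardiac", "stroke", "major-trauma"}:
        return "transport"                  # always escalate these
    if severity >= 7:
        return "transport"
    if weather == "severe" and severity >= 5:
        return "transport"                  # conditions may worsen outcomes
    if similar_calls_transported > 0.6:     # lean on historical outcomes
        return "transport"
    return "treat-on-site"
```

A deployed system would learn these thresholds from historical outcomes rather than hard-code them, which is the "data analytics" part of what the department describes.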
Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards.35
Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.”36
These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.
At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm?37
Data access problems
The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development.38
According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the U.S. ranks eighth overall in the world, compared to 93rd for China.39
But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.
Biases in data and algorithms
In some instances, certain AI systems are thought to have enabled discriminatory or biased practices.40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.”41
Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.”42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
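One standard way to surface the imbalance Buolamwini describes is to measure recognition accuracy separately for each demographic group, so a gap caused by skewed training data shows up directly. A minimal audit sketch, with invented group labels and predictions:

```python
# A minimal fairness-audit sketch: compute a face matcher's accuracy
# per demographic group. Group names, predictions, and true identities
# below are all invented for illustration.

def accuracy_by_group(records):
    """records: list of (group, predicted_id, true_id)."""
    totals, correct = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

results = [
    ("group_a", 1, 1), ("group_a", 2, 2), ("group_a", 3, 3), ("group_a", 4, 5),
    ("group_b", 1, 1), ("group_b", 2, 7), ("group_b", 3, 9), ("group_b", 4, 4),
]
rates = accuracy_by_group(results)
```

When the per-group rates diverge — here the toy data yields 75 percent for one group and 50 percent for the other — that is the signal that the training set needs more diverse data.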
Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:
The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create.43
AI ethics and transparency
Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made.44
In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.”45 Enrollment choices can be expected to be very different when considerations of this sort come into play.
Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers.46
For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates.47
There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms.48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.
In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms.49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.
In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.
Improve data access
The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity.50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.
In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality.51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.
Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy.52 That helps people track movements in public interest and identify topics that galvanize the general public.
Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.
In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol where certified researchers can query health data it has using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.
There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own material on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.
Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.”53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies.54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.
Increase government investment in AI
According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology.55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits.56
Promote digital education and workforce development
As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers. Unless our educational system generates more people with these capabilities, the shortage will limit AI development.
For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education.57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.
But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.
One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom.58 As such, they are precursors of new educational environments that need to be created.
Create a federal AI advisory committee
Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.
In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.”59
The specific questions the committee is asked to address include the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.
This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.
Engage with state and local officials
States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.”60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.”61
According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.
Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city.62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.
Regulate broad objectives more than specific algorithms
The European Union has taken a restrictive stance on these issues of data collection and analysis.63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected.64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.
The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’”65 In addition, these new rules give citizens the right to review how digital services make specific algorithmic choices that affect them.
If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.
It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.
Take biases seriously
Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.
For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.”66
Maintain mechanisms for human oversight and control
Some individuals have argued that there needs to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.”67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.
In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness.68
A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account.69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
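The voting-based aggregation the researchers describe can be sketched as majority voting over dilemma scenarios, with the majority preference applied when a scenario arises. The scenario names and votes below are invented:

```python
# Toy version of a "voting-based system" for ethical decisionmaking:
# tally many people's choices per scenario, then apply the majority
# preference. Scenarios and votes are invented for illustration.
from collections import Counter

def learn_preferences(votes):
    """votes: list of (scenario, choice). Returns scenario -> majority choice."""
    by_scenario = {}
    for scenario, choice in votes:
        by_scenario.setdefault(scenario, Counter())[choice] += 1
    return {s: c.most_common(1)[0][0] for s, c in by_scenario.items()}

def decide(preferences, scenario, default="brake"):
    """Apply the aggregated public preference; fall back to a safe default."""
    return preferences.get(scenario, default)

votes = [
    ("swerve-vs-straight", "swerve"), ("swerve-vs-straight", "swerve"),
    ("swerve-vs-straight", "straight"),
]
prefs = learn_preferences(votes)
```

The published system aggregated 1.3 million respondents over far richer scenario descriptions, but the core move — turning distributed moral judgments into a decision rule — is the one shown here.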
Penalize malicious behavior and promote cybersecurity
As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends.70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions.71
In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security.72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.”73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
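A policy engine of the kind the bank describes can be sketched as simple firewall rules combined with a machine learning fraud score. The fields and thresholds below are assumptions for illustration, not the bank's actual policies:

```python
# Illustrative voice-firewall sketch: block a call when it matches a
# rule (known harasser, robocall-like volume) or when a (stubbed)
# machine learning model scores it as likely fraud. Fields and
# thresholds are assumptions.

def should_block(call, fraud_score, blocklist):
    """call: dict with 'caller_id' and 'calls_last_hour' keys."""
    if call["caller_id"] in blocklist:          # known harassing caller
        return True
    if call["calls_last_hour"] > 20:            # robocall-like volume
        return True
    if fraud_score > 0.9:                       # model flags likely fraud
        return True
    return False
```

The design point is the hybrid: deterministic rules catch known bad actors, while the learned score generalizes to callers the rules have never seen.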
To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.
Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions.74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.
Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.
The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.
Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.
Mumbai-Bangalore: SAP today said it has successfully implemented the SAP S/4HANA business suite at Larsen & Toubro Infotech (LTI) in five months’ time.
LTI is undertaking several initiatives as part of its digital transformation journey. To enhance its global operational excellence, the company needed a robust platform to enable innovation and improve agility. LTI selected SAP S/4HANA on Cloud as its platform.
“At LTI, we believe the change begins within. We have helped several global companies with their SAP S/4HANA programs and found it to be the right platform for our own digital transformation. To amplify business outcomes, we deployed LTI’s S/4HANA smart analyzer and Mosaic platform that many of our clients have also leveraged,” said Aftab Ullah, COO, LTI.
“Digital is opening up an infinite world of new possibilities for companies to reimagine their business models, the way they work, and how they compete. With SAP as the digital core, LTI will be powered by live information from across its operations, to help increase productivity, improve collaboration, and drive innovation,” said Deb Deep Sengupta, President & Managing Director, SAP Indian Sub-continent.
“In the current digital era, technology implementation as well as business process change is driven by greater agility, sharper insights and more responsive business models, which is delivered by next generation platforms like SAP S/4HANA,” said Kamolika Peres, Vice President and Head – Strategic Customer Program, SAP Indian Subcontinent.
“Our partnership with LTI is to optimize processes and enhance business excellence throughout the organization and for its customers. This is an extension of our mutual objective of enabling customers to unlock their digital potential,” added Peres.
LTI is also a global partner of SAP and together, the two companies are supporting digital journeys of their mutual clients.
CommonWell’s executive director said this latest step “breaks down another interoperability barrier”
Connection capabilities to the Carequality framework are now “generally available” to members of the CommonWell Health Alliance, officials announced today.
CommonWell, a trade association providing a vendor-neutral platform and interoperability services for its members, announced in August that it had started a limited roll-out of live bidirectional data sharing with an initial set of CommonWell members and providers and other Carequality Interoperability Framework adopters. This marked a key step in a collaborative effort to increase health IT connectivity across the country by enabling CommonWell subscribers to engage in health data exchange through directed queries with Carequality-enabled providers, and vice versa.
In just the first two weeks of a few CommonWell-enabled providers being connected, Jitin Asnaani, CommonWell Health Alliance executive director, said there were more than 4,000 documents bilaterally exchanged with Carequality-enabled providers.
Since then, by leveraging the technological infrastructure built by CommonWell service provider Change Healthcare, members Cerner and Greenway Health successfully completed a focused rollout of the connection with a handful of their provider clients, who have been exchanging data daily with Carequality-enabled providers, officials stated today.
Now, since the connection went live in July, officials noted that CommonWell-enabled providers have bilaterally exchanged more than 200,000 documents with Carequality-enabled providers locally and nationwide.
“We are proud to break down yet another barrier to interoperability by making this much-anticipated connection available to our members and their clients,” Asnaani said in a statement today. “This increased connectivity will serve to empower providers with access to patient health data critical to their healthcare decision-making.”
In December 2016, CommonWell and Carequality, an initiative of The Sequoia Project, announced connectivity and collaboration efforts with the aim of providing additional health data sharing options for stakeholders. Officials said that the immediate focus of the work between Carequality and CommonWell would be on extending providers’ ability to request and retrieve medical records electronically from other providers. In the past two years, teams at both organizations have been working to establish that connectivity.
Together, CommonWell members and Carequality participants represent more than 90 percent of the acute EHR market and nearly 60 percent of the ambulatory EHR market. More than 15,000 hospitals, clinics, and other healthcare organizations are actively deployed on the Carequality framework or the CommonWell network.
Carequality is a national-level, consensus-built, common interoperability framework to enable exchange between and among health data sharing networks. It brings together electronic health record (EHR) vendors, record locator service (RLS) providers and other types of existing networks from the private sector and government, to determine technical and policy agreements to enable data to flow between and among networks and platforms.
CommonWell Health Alliance operates a health data sharing network that enables interoperability using a suite of services aiming to simplify cross-vendor nationwide data exchange. Services include patient ID management, advanced record location, and query/retrieve broker services, allowing a single query to retrieve multiple records for a patient from member systems.
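A single-query broker of the kind described can be sketched as a fan-out over member systems. The function and data below are hypothetical illustrations of the pattern, not CommonWell’s actual API: one incoming query is dispatched to every connected system, and the returned documents are merged into one response.

```python
def broker_query(patient_id, member_systems):
    """Illustrative record-location broker: fan a single patient query
    out to every connected member system and merge the documents that
    come back, so the requester issues only one query."""
    results = []
    for system_name, lookup in member_systems.items():
        for doc in lookup(patient_id):
            results.append({"source": system_name, "document": doc})
    return results

# Hypothetical member systems, each exposing a lookup function.
systems = {
    "ehr_a": lambda pid: ["discharge_summary.pdf"] if pid == "p1" else [],
    "ehr_b": lambda pid: ["lab_results.xml"] if pid == "p1" else [],
}
print(broker_query("p1", systems))
```

The design choice the broker model embodies is that the querying provider never needs to know which vendors hold a patient’s records; record location and cross-vendor retrieval happen behind the single interface.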
Following the August announcement of the limited bi-directional data sharing capabilities, Micky Tripathi, Ph.D., president and CEO of the Massachusetts eHealth Collaborative said, “This is the ‘golden spike’ moment, connecting the two big railroads, like when AT&T and Verizon finally got connected. This is building that bridge.” Tripathi, who also directly observes and participates in conversations with Carequality and CommonWell, added, “It will take a while for all of the production sites and different vendors to get up and running. That will probably take a couple of years. But you have to have the bridge to connect them to begin.”
One key element in this progression is that currently, EHR giant Epic is not a member of CommonWell, despite other major EHR vendors pushing Epic in that direction. “Because sharing among Epic customers is already universal, when CommonWell connects to Carequality, the entire Epic base will become available, creating instant value for most areas of the country,” a recent KLAS report on interoperability stated.
Interestingly, Tripathi noted in August that once there is “general availability” of the data sharing services for all Carequality and CommonWell members, the competition factor will become less important. “It makes both networks more valuable,” Tripathi said at the time.
It appears as if that “general availability” time has now come. “Thanks to the CommonWell-Carequality connection, our patients can have access to their medical records regardless of the EHR a health care facility uses,” said David Callecod, president and CEO of Lafayette General Health, a Cerner client located in Lafayette, La. “When data is made readily available, providers can make diagnostic and treatment decisions more quickly, and patients can recover sooner. Better data means better communication with our patients and providers, better care and better outcomes. This is a very powerful tool!”
Officials also noted that with the connection officially in production, additional CommonWell members, including Brightree, Evident and MEDITECH, are in the process of subscribing to the connection and taking it live with their provider clients.
In light of these mounting concerns, Pew Research Center and Elon University’s Imagining the Internet Center queried technology experts, scholars and health specialists on this question: Over the next decade, how will changes in digital life impact people’s overall well-being physically and mentally?
Some 1,150 experts responded in this non-scientific canvassing. Some 47% of these respondents predict that individuals’ well-being will be more helped than harmed by digital life in the next decade, while 32% say people’s well-being will be more harmed than helped. The remaining 21% predict there will not be much change in people’s well-being compared to now. (See the section titled “About this canvassing of experts” for further details about who these experts are and the structure of this canvassing sample.)
Many of those who argue that human well-being will be harmed also acknowledge that digital tools will continue to enhance various aspects of life. They also note there is no turning back. At the same time, hundreds of them suggested interventions in the coming years they feel could mitigate the problems and emphasize the benefits. Moreover, many of the hopeful respondents also agree that some harm will arise in the future, especially to those who are vulnerable.
Participants were asked to explain their answers, and most wrote detailed elaborations that provide insights about hopeful and concerning trends. They were allowed to respond anonymously and many did so; their written comments are also included in this report.
Three types of themes emerged: those tied to expert views that people will be more helped than harmed when it comes to well-being; those tied to potential harms; and those tied to remedies these experts proposed to mitigate foreseeable problems. The themes are outlined in the nearby table.
Themes about the future of well-being and digital life

MORE HELPED THAN HARMED

Connection: Digital life links people to people, knowledge, education and entertainment anywhere globally at any time in an affordable, nearly frictionless manner.

Commerce, government and society: Digital life revolutionizes civic, business, consumer and personal logistics, opening up a world of opportunity and options.

Crucial intelligence: Digital life is essential to tapping into an ever-widening array of health, safety, and science resources, tools and services in real time.

Contentment: Digital life empowers people to improve, advance or reinvent their lives, allowing them to self-actualize, meet soul mates and make a difference in the world.

Continuation toward quality: Emerging tools will continue to expand the quality and focus of digital life; the big-picture results will continue to be a plus overall for humanity.

MORE HARMED THAN HELPED

Digital deficits: People’s cognitive capabilities will be challenged in multiple ways, including their capacity for analytical thinking, memory, focus, creativity, reflection and mental resilience.

Digital addiction: Internet businesses are organized around dopamine-dosing tools designed to hook the public.

Digital distrust/divisiveness: Personal agency will be reduced and emotions such as shock, fear, indignation and outrage will be further weaponized online, driving divisions and doubts.

Digital duress: Information overload + declines in trust and face-to-face skills + poor interface design = rises in stress, anxiety, depression, inactivity and sleeplessness.

Digital dangers: The structure of the internet and pace of digital change invite ever-evolving threats to human interaction, security, democracy, jobs, privacy and more.

POTENTIAL REMEDIES

Reimagine systems: Societies can revise both tech arrangements and the structure of human institutions – including their composition, design, goals and processes.

Reinvent tech: Things can change by reconfiguring hardware and software to improve their human-centered performance and by exploiting tools like artificial intelligence (AI), virtual reality (VR), augmented reality (AR) and mixed reality (MR).

Regulate: Governments and/or industries should create reforms through agreement on standards, guidelines, codes of conduct, and passage of laws and rules.

Redesign media literacy: Formally educate people of all ages about the impacts of digital life on well-being and the way tech systems function, as well as encourage appropriate, healthy uses.

Recalibrate expectations: Human-technology coevolution comes at a price; digital life in the 2000s is no different. People must gradually evolve and adjust to these changes.

Fated to fail: A share of respondents say all this may help somewhat, but – mostly due to human nature – it is unlikely that these responses will be effective enough.

PEW RESEARCH CENTER AND ELON UNIVERSITY’S IMAGINING THE INTERNET CENTER
These findings do not represent all the points of view that are possible in responding to a question like this, but they do reveal a wide range of valuable observations based on current trends. Here are some representative quotes from these experts on each of these themes.
The benefits of digital life
Connection: Daniel Weitzner, principal research scientist and founding director of MIT’s Internet Policy Research Initiative, explained, “Human beings want and need connection, and the internet is the ultimate connection machine. Whether on questions of politics, community affairs, science, education, romance or economic life, the internet does connect people with meaningful and rewarding information and relationships. … I have to feel confident that we can continue to gain fulfillment from these human connections.”
Commerce, government and society: Pete Cranston, a Europe-based trainer and consultant on digital technology and software applications, wrote, “There’s a top-1%, first-world response, which is to bemoan the impact of hyperconnectedness on things like social interaction, attention span, trolling and fake news – all of which are real but, like complaining about the marzipan being too thick on the Christmas cake, are problems that come with plenty and surplus. There’s a rest-of-the-world response which focuses more on the massive benefits to life from access to finance, to online shopping, to limitless, free research opportunities, to keeping in touch with loved ones in far-away places (and think migrant workers rather than gap-year youth).”
Crucial intelligence: Micah Altman, director of research and head scientist for the program on information science at MIT, said, “Most of the gains in human well-being (economic, health, longevity, life-satisfaction and a range of choices) over the last century and a half have come from advances in technology that are the long-term results of scientific advances. However, these gains have not been distributed equitably, even in democracies. Many advances from the fields of computer science, information science, statistics and computational social science are just beginning to be realized in today’s technology – and there remains a huge potential for long-term improvement. Further, since information is a non-consumptive good, it lends itself to broad and potentially more equitable distribution. For example, the relatively recent trends towards openness in scientific publication, scientific data and educational resources are likely to make people across the world better off – in the short term, by expanding individuals’ access to a broad set of useful information; in the medium term, by decreasing barriers to education (especially higher-ed); and in the long term by enhancing scientific progress.”
Contentment: Stephen Downes, a senior research officer at the National Research Council Canada, commented, “The internet will help rather than harm people’s well-being because it breaks down barriers and supports them in their ambitions and objectives. We see a lot of disruption today caused by this feature, as individuals and companies act out a number of their less desirable ambitions and objectives. Racism, intolerance, greed and criminality have always lurked beneath the surface, and it is no surprise to see them surface. But the vast majority of human ambitions and objectives are far more noble: people desire to educate themselves, people desire to communicate with others, people desire to share their experiences, people desire to create networks of enterprise, commerce and culture. All these are supported by digital technologies, and while they may not be as visible and disruptive as the less-desirable objectives, they are just as real and far more massive.”
Continuation toward quality: Paul Jones, professor of information science at the University of North Carolina, Chapel Hill, proposes that future artificial intelligence (AI) will do well at enhancing human well-being, writing, “Humans need tools. Humans need and want augmentation. And as the saying goes ‘First we make our tools, then our tools form us.’ Since the first protohuman, this has been true. But soon our tools will want, demand and create tools for their own use. The alienation of the industrial age has already given up the center stage to the twisted social psychology of the service industry. Next, will our tool-created overlords be more gentle and kind than the textile factory, the sewing room or the call center? I believe they will be.”
Concerns over harms
Digital deficits: Nicholas Carr, well-known author of numerous books and articles on technology and culture, wrote, “We now have a substantial body of empirical and experiential evidence on the personal effects of the internet, social media and smartphones. The news is not good. While there are certainly people who benefit from connectedness – those who have suffered social or physical isolation in the past, for instance – the evidence makes clear that, in general, the kind of constant, intrusive connectedness that now characterizes people’s lives has harmful cognitive and emotional consequences. Among other things, the research reveals a strong association, and likely a causal one, between heavy phone and internet use and losses of analytical and problem-solving skill, memory formation, contextual thinking, conversational depth and empathy as well as increases in anxiety.”
Digital addiction: David S.H. Rosenthal, retired chief scientist of the LOCKSS Program at Stanford University, said, “The digital economy is based upon competition to consume humans’ attention. This competition has existed for a long time (see Tim Wu’s ‘The Attention Merchants’), but the current generation of tools for consuming attention is far more effective than previous generations. Economies of scale and network effects have placed control of these tools in a very small number of exceptionally powerful companies. These companies are driven by the need to consume more and more of the available attention to maximize profit. This is already having malign effects on society (see the 2016 presidential election). Even if these companies wanted to empower less-malign effects, they have no idea how to, and doing so would certainly impair their bottom line. Thus these companies will consume more and more of the available attention by delivering whatever they can find to grab and hold attention. The most effective way to do this is to create fear in the reader, driving the trust level in society down (see Robert Putnam’s ‘Making Democracy Work’ for the ills of a low-trust society).”
Digital distrust/divisiveness: Judith Donath, author of “The Social Machine, Designs for Living Online,” commented, “If your objective is to get people to buy more stuff, you do not want a population of people who look at what they have and at the friends and family surrounding them, and think to themselves ‘life is good, I appreciate what I have, and what I have is enough.’ If your goal is to manipulate people, to keep a population anxious and fearful so that they will seek a powerful, authoritarian leader – you will not want technologies and products that provide people with a strong sense of calm and well-being. Keeping people in a continual state of anxiety, anger, fear, or just haunted by an inescapable, nagging sense that everyone else is better off than they are can be very profitable. In short, the individual researchers and developers may be motivated by a sincere desire to advance understanding of mood, cognition, etc., or to create technologies that nudge or control our responses for our own good, but the actual implementation of these techniques and devices is likely to be quite different – to be used to reduce well-being because a population in a state of fear and anxiety is a far more malleable and profitable population.”
Digital duress: Jason Hong, professor at the Human Computer Interaction Institute at Carnegie Mellon University, wrote, “Many years ago, the famed Nobel laureate Herb Simon pointed out that ‘Information consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.’ Simon presciently pointed this out in 1971. However, back then, the challenge was information overload. Today, we now also have organizations that are actively vying for our attention, distracting us with smartphone notifications, highly personalized news, addictive games, Buzzfeed-style headlines and fake news. These organizations also have a strong incentive to optimize their interaction loops, drawing on techniques from psychology and mass A/B testing to draw us in. Most of the time it’s to increase click-through rates, daily active users and other engagement metrics, and ultimately to increase revenues. There are two major problems with these kinds of interactions. The first is just feeling stressed all the time, due to a constant stream of interruptions combined with fear of missing out. The second, and far more important, is that engagement with this kind of content means that we are spending less time building and maintaining relationships with actual people. Having good friends [has the] equivalent [health effects] of quitting smoking, and today’s platforms are unintentionally designed to isolate us rather than helping us build strong relationships with others.”
Digital dangers: Tiziana Dearing, a professor at the Boston College School of Social Work, said, “People’s well-being will be affected for the worse by digital technology for three reasons. 1) Because we have evolved as interpersonal, social creatures and therefore are unable to adapt to the behaviors, needs, even maybe the wiring required to thrive socioemotionally and physically in a digital world at the pace that digital change will require. 2) Because digital technology – from design to algorithms – has evolved without sufficient consideration of social empathy and inherent bias. 3) Because we have not figured out how to mitigate the ability that certain forms of technology have created to be our worst selves with each other. Don’t get me wrong. Technological developments hold tremendous potential to cure disease, solve massive human problems, level the information playing field, etc. But our ability to adapt at a species level happens on a much slower cycle, and our human behaviors get in the way.”
Intervention ideas to ease problems
Reimagine systems: Sherry Turkle, one of the world’s foremost researchers into human-computer interaction and professor at MIT, shared the following action steps: “1) Working with companies in terms of design – [these tools] should not be designed to engage people in the manner of slot machines. 2) [There should be] a movement on every level to make software transparent. This is a large-scale societal goal! 3) Working with companies to collaborate with consumer groups to end practices that are not in the best interests of the commons or of personal integrity. 4) A fundamental revisiting of the question of who owns your information. 5) A fundamental revisiting of the current practices that any kind of advertisement can be placed online (for example ads that are against legal norms, such as ageist, sexist, racist ads). 6) Far more regulation of political ads online. 7) An admission from online companies that they are not ‘just passive internet services.’ 8) Finding ways to work with them so that they are willing to accept that they can make a great deal of money even if they accept to be called what they are! This is the greatest business, political, and social and economic challenge of our time, simply learning to call what we have created what it really is and then regulate and manage it accordingly, bring it into the polity in the place it should really have.”
Reinvent tech: Susan Price, lead experience strategist at USAA, commented, “We can use human-centered technology design to improve our experiences and outcomes, to better serve us. I have a vision for a human API that allows us to moderate and throttle what occupies our attention – guided by principles and rules in each user’s direct control, with a model and framework that prioritizes and categorizes content as it reaches our awareness – to reduce effort and cognitive load in line with our own expressed goals and objectives. Today we cede that power to an array [of] commercial vendors and providers.”
Regulate: Dana Chisnell, co-director of the Center for Civic Design, wrote, “There are dozens of projects happening to try to make the internet a better place, but it’s an arms race. As individuals find tools for coping and managing their digital lives, technology companies will find new, invasive ways to exploit data generated on the internet in social media. And there will be more threats from more kinds of bad actors. Security and privacy will become a larger concern and people will feel more powerless in the face of technology that they don’t or can’t control. And it will take many years to understand how to negotiate that race and come to some kind of detente.”
Redesign media literacy: Alex Halavais, director of the M.A. in social technologies program at Arizona State University, said, “The primary change needs to come in education. From a very early age, people need to understand how to interact with networked, digital technologies. They need to learn how to use social media, and learn how not to be used by it. They need to understand how to assemble reliable information and how to detect crap. They need to be able to shape the media they are immersed in. They need to be aware of how algorithms and marketing – and the companies, governments, and other organizations that produce them – help to shape the ways in which they see the world. Unfortunately, from preschool to grad school, there isn’t a lot of consensus about how this is to be achieved.”
Recalibrate expectations: Sheizaf Rafaeli, a professor at the University of Haifa in Israel, wrote, “People are adaptive. In the long run, we are reasonable, too. We will learn how to rein in the pitfalls, threats, bad guys and ill-meaning uses. These will continue to show up, but the march is towards progress. Better, more meaningful lives. Healthier, more-supportive environments. It is a learning process, and some of us, sometimes, get an ‘F’ here or there. But we learn. And with digital tech, we learn faster. We converse and communicate and acknowledge each other like never before. And that is always a good start. Bad things, like greed, hate, violence, oppression will not be eradicated. But the digital is already carrying, delivering and instantiating much promise. This is not rosy-colored utopian wishful thinking. It is a realistic take on the net effects. I would rather trade places with my grandkids than with my grandparents.”
Fated to fail:Douglas Massey, a professor of sociology and public affairs at Princeton University, responded to say that interventions are not likely to be possible. He wrote, “I am not very optimistic that democratically elected governments will be able to regulate the internet and social media in ways that benefit the many rather than the few, given the vast amounts of money and power that are at stake and outside the control of any single government, and intergovernmental organizations are too weak at this point to have any hope of influence. The Trump administration’s repeal of net neutrality is certainly not a good sign.”
More and more women are getting computer science and electrical engineering degrees from the Bay Area’s two elite universities, a goal U.S. colleges have been pursuing for decades. But in the midst of the #MeToo era’s focus on sexual misconduct, harassment and gender discrimination in tech, some of these young women say they’re worried about what their future workplace holds.
“Even though it’s not very apparent at Stanford, I think we all know that it’s a problem within the industry,” said Monica Anuforo, a junior pursuing a computer science degree. “I’m not very intimidating. I’m pretty small. It’s super easy for me to be ignored or for things I say to be written off, and I’m worried about that happening.
“I’m worried about it, but not enough to deter me,” said Anuforo, who became interested in computer science after taking a high school computing class “on a whim” because she was good at math and logic.
Since 2010, Stanford has steadily driven up the proportion of undergraduate women receiving degrees in computer science and electrical engineering from 11 percent to a record 31 percent in 2017, according to university data. UC Berkeley has doubled the percentage of women receiving those degrees during the same period, from 11 percent in 2010 to 22 percent in 2017, school data shows. That runs counter to a national trend, in which the proportion of women receiving degrees in computer and information sciences has dropped from a high of 37 percent in 1984 to about 18 percent in 2016, according to the U.S. Department of Education.
Stanford and Berkeley’s long-sought gains have come in the midst of a growing and heated debate over technology’s male-dominated culture. Ever since electrical engineer, lawyer and Harvard MBA Ellen Pao in 2012 launched an unsuccessful lawsuit alleging gender discrimination by VC firm Kleiner Perkins, claims and admissions of male misconduct in the region’s tech industry have followed one after another — from engineer Susan Fowler’s sexual harassment allegations against Uber to tech investor Dave McClure’s “I’m a creep. I’m sorry” apology for making inappropriate advances to women. Meanwhile, Google faces a lawsuit and federal investigation over whether it has paid women less than men and Uber last month agreed to settle a discrimination suit brought by hundreds of women and minority software engineers.
The issues haven’t escaped the attention of female students at Berkeley aiming for careers in computer science or electrical engineering, said computer science professor John DeNero.
“It comes up even on the first day of class,” he said. “The students are very keen to talk about it, understand it. They really want to know, ‘Are all companies the same? Is this something I’m going to see everywhere?’”
Those worries are exacerbated by some of the news from Silicon Valley, like the highly publicized memo written by former Google software engineer James Damore, who argued that women may be less biologically suited for tech jobs than men, said Tammy Nguyen, a senior computer science major at UC Berkeley.
“Those things are super discouraging,” said Nguyen, who taught herself to code in high school so she could create her own “themes” on the micro-blogging site, Tumblr. “Looking at that, I don’t even want to continue — why would I continue in a field where the people I’m working with think that I’m incapable?”
But Nguyen, whose confidence in her ability to excel in the field has grown as her studies have progressed, said she’s undeterred. She’s already weighing a software-engineering job offer from a major Silicon Valley company and plans to work in the tech industry, where workforces are, according to the National Center for Women & Information Technology, three-quarters male.
Universities across the United States have been working for years to solve the industry’s “pipeline problem,” trying both to attract more female students to computer science and engineering and to see them through to graduation. It’s a problem that begins early.
Google, in a 2014 research paper, reported that for girls and young women, most decision-making about whether to seek a computer science degree occurs before college. Last year, only 23 percent of U.S. high school students taking the advanced-placement test for computer science were female, according to the National Center for Women & Information Technology.
“Subtle or not-so-subtle effects have just led us to a world where male students tend to get more computer experience before they get to college,” said Berkeley’s DeNero, noting that computer scientists are usually depicted as male and boys are typically introduced to computing and computer games much earlier than girls.
To solve that, Berkeley and Stanford each took a number of different steps but shared one approach: They changed their introductory computer science classes to attract students with varying experience levels.
Berkeley added introductory data science and computer science courses specifically aimed at students with no prior programming experience. In 2011, DeNero said, the university also launched a computer science “kick start” program, bringing 30 female, first-year students to campus a week before classes started for an “intensive introduction to computer science.”
In addition, the school redesigned its mandatory but fast-paced introductory course for computer science and electrical engineering majors to make it more accessible to students with little or no programming experience, including the creation this year of different sections for different experience levels, DeNero said.
“We have invested a lot of time and energy in figuring out what our introductory curriculum should look like, how we teach our courses, and in particular what kind of support mechanisms can we put in place to make sure that somebody who wants to study computer science has a good chance of being successful,” he said.
Even with those changes, some students are intimidated at the prospect of entering a class with less experience than others. “We’re still in the process of figuring out how to address that,” DeNero said.
At Stanford, the university also created different introductory computer science courses for students with different levels of computing experience, according to computer science professor and former Google research scientist Mehran Sahami. Word of the changes got around.
“What we found was that over time the class could build up a reputation in terms of being inclusive,” said Sahami, who added that women-led clubs and student groups have also made computer science at Stanford more attractive to female students.
To send a message that computing skills have applications beyond traditional tech jobs, and to broaden students’ career opportunities, Stanford created 10 study tracks for computer science majors, with choices including computational biology, he said.
That move was made not only to draw more women but also to “make computer science more exciting for everyone,” he said.
Steve Blank, a retired entrepreneur and startup guru who teaches at both schools, said hiring managers in Silicon Valley would do well to see the growing number of female graduates of Berkeley and Stanford’s tech programs as a key to business development.
“It’s not just about engineering,” he said. “It’s about understanding customer needs and desires. My experience is that women actually do this better than men, maybe because they listen better.”
Despite the challenges she may face, Nguyen is looking forward to putting her education to work. “I really hope they see me as an equal,” she said. “I really hope they can see me for my skills.”
You only have to glance at today’s headlines to know that cybersecurity has surged to the top of the list of U.S. national security concerns. The good news is that more than a dozen states are engaged in a fierce competition to head the emerging leaderboard in cybersecurity facilities and programs. What the best initiatives have in common is close cooperation between U.S. defense and intelligence assets and world-class institutions of higher education.
Cybersecurity is quickly becoming the most lucrative of careers in IT, but the red-hot demand for certified IT security professionals is already creating a nationwide shortage. According to industry experts, currently there are roughly half a million cybersecurity-related job openings in the United States, but the gap between available workforce and demand is growing exponentially. By 2019, there will be 6 million job openings for information security professionals—but only 4.5 million security professionals to fill those roles. Industry analysts say there will be a projected need for 1.8 million additional cybersecurity professionals to fill the workforce gap by 2022.
According to the Bureau of Labor Statistics, the annual rate of growth for jobs in information security is projected at 37 percent between now and 2022; IT security jobs offer salaries three times the national average.
Here’s an in-depth look at locations that have established a leadership position in cybersecurity, an emerging high-tech sector racing to meet a growing threat.
GEORGIA: RED HOT CYBER HUB
Georgia has become a hotbed for cybersecurity activity, ranking third in the nation for information security, with companies in this sector generating more than $4.7 billion in annual revenue.
More than 14,300 tech companies and 115 information security companies, such as Check Point Software Technologies, Dell SecureWorks and IBM Security Services, are located in the Peach State. They benefit from a talent pool of nationally ranked cyber institutes such as Augusta University, Columbus State University, Georgia Institute of Technology (Georgia Tech) and University of Georgia.
Last April, Fortune magazine named both Atlanta and Augusta as two of the “7 Cities That Could Become the World’s Cybersecurity Capital.”
Atlanta is home to Georgia Tech, a vibrant corporate and funding eco-system, and local champions like Tom Noonan, who has been dubbed the “godfather of Atlanta’s cyber scene.” The city also is home to companies like Pindrop, a fast-growing startup that uses sophisticated analytics to root out phone fraud.
Augusta has risen in the cybersecurity ranks in recent years, with a new arsenal of initiatives aimed at further solidifying its status as a cybersecurity hub.
The state is investing $93 million for a world-class cyber range and training facility in the city’s downtown. A portion of Augusta University’s Riverfront Campus will become the Georgia Cyber Innovation and Training Center, the state’s centerpiece for cybersecurity research and development.
Earlier this year, Governor Nathan Deal announced $58 million for the creation of the training center, then announced in November 2017 another $35 million to expand the facility to include an incubator hub for technology startups as well as a training space for the state’s cybersecurity initiatives and workforce development programs.
“Given Georgia’s growing status as a technology and innovation hub, this additional investment will further cement our reputation as the ‘Silicon Valley of the South,’” said Deal. “When complete, the center will house a cyber range, the Georgia Bureau of Investigation’s new cybercrime unit and an incubator for startup cybersecurity companies.”
The new center will allow technology companies to establish fellowships, internships and co-op program opportunities for students and employees. It will also serve as a training facility for information security professionals employed by state and local governments.
“Cybersecurity technology is changing at a disruptive speed and today, that rate of change is likely the slowest it will be in our lifetime,” said Deal. “This visionary approach to cybersecurity underscores our commitment to encouraging innovation and developing a deep talent pool ready to establish Georgia as the safest state in the nation for today’s leaders in technology.”
The Georgia Technology Authority (GTA) is overseeing construction and operation of the cybersecurity center facilities. GTA partners include the U.S. Army Cyber Center of Excellence (ARCYBER) at Fort Gordon, the Georgia National Guard, the Georgia Bureau of Investigation, the City of Augusta, the University System of Georgia, the Technical College System of Georgia, local school systems and private corporations.
The first phase of the Hull McKnight Georgia Cyber Innovation and Training Center is scheduled to open on July 10, 2018, and the second building is planned for completion in December 2018.
Augusta’s cybersecurity assets have helped to attract companies such as tech giant Unisys, a $3.5-billion global provider of information technology services. The company has opened its newest North American client service center in Augusta, citing the vibrant local economy, state and local leadership, and a well-educated, smart workforce as a few of the reasons it located there.
“Augusta already plays a key role as the headquarters of our growing security business,” said Dale Dye, regional site executive director for Unisys. “Tom Patterson, Unisys Chief Trust Officer, is based in Augusta. We see potential for an expanded role for Augusta in our security business.”
Unisys’ new center provides services to the U.S. Army, which recently selected Unisys for the Army Enterprise Service Desk, a single point of contact for Army personnel who need help desk or other end user IT support services. Unisys located in Augusta because of its proximity to Fort Gordon, headquarters of the Army Cyber Center of Excellence (ARCYBER).
“Fort Gordon is a source of veterans and military spouses knowledgeable in IT who can easily adapt to IT service and management roles,” said Dye. “We can help with the transition to civilian life by offering competitive and sustainable jobs that help keep a highly desirable employee population within Georgia.”
About 4,700 high-tech military personnel will be relocated to Fort Gordon as part of the growth within military intelligence, cyber and the National Security Agency, with the Army spending an estimated $2.1 billion in facility improvements by 2020.
Unisys plans to increase the number of its employees in downtown Augusta from approximately 300 to 700 within the next two years to keep pace with ARCYBER’s demand for cyber professionals.
Georgia is also reshaping its statewide education system to strengthen technology curriculums and increase the cybersecurity talent pipeline in the area. Georgia Tech is home to 12 labs and centers dedicated to cybersecurity, with nearly 500 scientists, faculty and students involved with cybersecurity research, as well as the Advanced Technology Development Center (ATDC), the state’s technology incubator, helping organically grow companies throughout the state.
In March 2018, the institute announced eight technology startups that will go through an early-stage venture fund created by Georgia Tech and 10 leading global corporations. The Engage Ventures growth program differs from other accelerators in that it targets later-stage companies and helps them develop and execute go-to-market strategies.
“Bringing top executives from our corporate partners together with founders of high-growth companies to focus on go-to-market has unlocked immense value,” said Thiago Olson, managing director of Engage Ventures. “We’re humbled by the talent pulled together this spring.”
Companies in the spring cohort are pioneering technologies spanning from autonomous flight to artificial intelligence to blockchain. Half of the startups have Georgia Tech connections. Three are current portfolio companies in ATDC and a fourth participated in CREATE-X, a series of entrepreneurship programs for undergraduate students.
ONTARIO: EMERGING CYBER LEADER
Over the past few years, Ontario has seen steady growth in the number of cybersecurity companies setting up shop in Toronto, Kitchener-Waterloo and the Ottawa region. The province is home to more than 90 small and medium-sized companies focused on cybersecurity and also hosts large multinationals such as IBM, Symantec, McAfee, TrendMicro, Cisco and Check Point, which have cybersecurity offices across the province.
“In an economy that is becoming increasingly digital, our personal and commercial privacy is of utmost importance,” said Stephen Del Duca, Ontario Minister of Economic Development and Growth. “Ontario is an emerging leader in the world of cybersecurity, with a number of homegrown companies producing leading-edge technology to ward off cyber threats. We’re committed to developing the next generation of highly skilled talent to address the needs of an ever-changing digital economy.”
eSentire is among Ontario’s fast-growing group of cybersecurity companies—a hub of innovation that’s gaining recognition for advanced solutions to today’s complex data protection challenges.
“Where other cybersecurity companies might rely on tools such as firewalls and anti-virus programs to detect and block hackers, eSentire takes proprietary, leading-edge detection and prevention technology and combines it with live security analysts who assess and resolve attacks,” said Trevor Dauphinee, Vice President, Strategic Accounts, Ontario Investment Office. “This approach has earned eSentire a great deal of respect in the industry.”
Toronto’s SecureKey Technologies is using innovative solutions to simplify consumer access to online services and applications by letting them use their digital credentials with trusted providers.
For example, Canadians who want to check the status of government benefits such as pension or employment insurance can choose SecureKey’s Concierge Service solution to sign in to the Government of Canada website through their bank. SecureKey is in the process of launching a new service that makes it simpler for consumers to share their identity and their other data in a secure and private manner.
“Ontario is a natural choice for both small and large cybersecurity firms,” said Dauphinee. “We have one of the world’s soundest banking systems, where companies can maximize profit and minimize their risk.”
To meet the needs of industry, Ontario’s post-secondary institutions are ramping up cybersecurity-focused programming and curriculum.
Ontario’s 44 colleges, universities and private career colleges produce 40,000 STEM graduates and 44,000 business graduates a year. In fact, the province has committed to boosting the number of STEM graduates per year to 50,000 over the next five years, with a special focus on artificial intelligence (AI). Many cybersecurity companies are using AI algorithms to help professionals prepare for cyberattacks and safeguard the enterprise.
Ontario is taking action to protect its cyber systems and financial institutions from digital attacks through a one-year $4-million pilot project.
Under the pilot project, financial institutions will be linked with Ontario start-ups and SMEs to develop and facilitate the adoption of technological solutions. The pilot project will build on the momentum of Ontario’s existing cybersecurity strength and support the development and adoption of leading cybersecurity technologies in the financial sector.
“The project will also support the creation of high-tech jobs and a specialized talent pool in cybersecurity to allow the province to develop the next generation of highly skilled talent to address the needs of a digital economy,” said Dauphinee.
Ontario also is a hub of leading-edge risk management thinking, applied research, education and training through the Global Risk Institute in Financial Services, a collaboration of regulators, risk experts, academics, policy makers and practitioners.
Global financial leaders have recognized the competitive advantages they can gain by relocating or expanding their operations in Ontario, including access to major cities in the U.S. With the implementation of CETA, Ontario has free trade agreements with more than 50 countries, including every G7 country.
Based on venture capital dollars invested in cybersecurity, Canada ranks fourth in the world, behind only the U.S., Israel and the U.K., with Ontario leading the country as a major global hub of cybersecurity innovation. Ontario alone attracted more than $250 million in venture capital funding for its small and medium-sized cybersecurity enterprises between 2011 and 2015. According to a Deloitte report done in cooperation with the TFSA, cybersecurity spending in Canada exceeded $2 billion in 2016.
CYBER INNOVATION CENTER IS A NATIONAL LEADER IN LOUISIANA
Over the past decade, Louisiana’s traditional oil and gas, agribusiness and chemical industries have pivoted to a technology-driven, knowledge-based economy with a top-notch cybersecurity ecosystem. The state’s fast-growing software and IT sector includes high-profile companies such as EA, CenturyLink, IBM, CGI, GE Digital and CSRA.
The $107 million Cyber Innovation Center (CIC) has played a critical role in helping Louisiana establish a fully integrated national hub for cybersecurity. CIC is a business technology accelerator and anchor of the 3,000-acre National Cyber Research Park (NCRP) in Shreveport-Bossier City.
The initial investment by the parish and the state established the CIC as the anchor of the NCRP and commissioned it as a catalyst for the development and expansion of a knowledge-based workforce that can support the growing needs of government, industry and academia within the state.
The CIC and its partners changed the landscape by developing a skilled pipeline tailored to learners at multiple levels. This multi-faceted approach has yielded a sustainable pool of talent that is entering the cyber workforce.
“Our academic outreach program was one of the first projects we put together,” said Craig Spohn, Executive Director of CIC. “We needed to solve the question around what workforce was going to fill the new cyber demands in Louisiana. Now, we can proudly say we have produced an organically grown, sustainable, systemic workforce in the state and around the country that will continue to regenerate itself.”
CIC has attracted such tenants as Lockheed Martin, Boeing and Northrop Grumman as well as Fortune 500 CSRA’s 800-job Integrated Technology Center (ITC). With CenturyLink’s Fortune 500 headquarters in Monroe, the research park anchors the I-20 Cyber Corridor.
CSRA, a $5 billion company with over 18,000 employees across the globe, has partnered with Louisiana to create 800 professional technology careers by 2018. As a trusted partner to the federal government, CSRA envisioned a state-of-the-art IT facility designed exclusively to help solve the federal government’s most difficult technology challenges and combat the continual threat of cyber terrorism.
“This facility is a key differentiator for CSRA and helps position us as forward-thinkers and trendsetters in the federal IT industry,” said Larry Prior, CSRA President and CEO. CSRA evaluated over 130 locations across the country before choosing Northwest Louisiana as the location for its new tech center.
“With a set of higher education partners able to provide high-quality technical education, a large technical workforce within 200 miles and an attractive set of state incentives, all geared at mutual success, the Shreveport-Bossier City region has been an ideal location for CSRA and the ITC,” said Mimi Hedgcock, Senior Principal, External Affairs, ITC, CSRA.
Nearby Barksdale Air Force Base and its Global Strike Command serve as an ideal workforce pipeline for CSRA and its federal government customers.
“CSRA aligns with the mission and development at CIC’s National Cyber Research Park, providing a synergy to attract businesses and talent focused on high-tech innovation and research, allowing government, industry and academia to collaborate, innovate and develop state-of-the-art technology,” said Hedgcock.
Much of Louisiana’s high-tech action takes place along the “I-20 Cyber Corridor,” which provides access to high-caliber students from local higher education institutions such as Louisiana Tech University, creator of the nation’s first four-year degree in cyber engineering.
The state also has invested more than $68 million directly in higher education technology initiatives.
“The goal is to rapidly increase the number of college graduates in computer science, cyber engineering, electrical engineering and related STEM curricula,” said Don Pierson, Secretary of Louisiana Economic Development (LED).
LSU’s Transformational Technologies & Cyber Research Center (TTCRC) at the LSU Innovation Park in Baton Rouge has attracted over $3 million in cybersecurity-related applied research funding this year, to go with a backlog of more than $8 million in cybersecurity research.
The center, which is housed within the university’s wholly owned nonprofit research and development enterprise, the Stephenson Technologies Corp., is undertaking landmark research in the petrochemical, medical and maritime sectors to protect vital industries from cyber threats.
Stephenson Technologies Corp. President Jeff Moulton, who oversees the TTCRC, has been named by Governor John Bel Edwards to a newly created Louisiana Cybersecurity Commission that has each of these threats and solutions on its radar.
“LSU is the flagship university of the state,” said Moulton. “We harness the research capability, employ and train interns, hire the top graduates, and leverage LSU’s intellectual property. We have transformed multiple LSU research facilities and built cyber ‘living labs’ to serve our customers.”
Current research ranges from missile defense security, to protecting the electric grid from cyberattacks, to comprehensive visualization software for protecting U.S. borders and the ring of Caribbean Sea nations.
Radiance Technologies, Inc., a Huntsville-based cybersecurity company, has opened an applied research office co-located with TTCRC. In addition to Radiance, LSU/TTCRC has been approached by at least half a dozen companies about locating on or near campus to work closely with TTCRC.
“I fully anticipate a couple of these becoming site selection projects (of the LED flavor), attracted by the work we’re doing and the way we’re doing it, and catalyzed by LED’s incentives,” said Greg Trahan, Director of Economic Development, Office of Research & Economic Development Program Manager at TTCRC. “The companies have ranged from small firms with powerful specific capability to large firms looking to put a domain-focused R&D center here. We’re doing great work on projects that matter, and that’s drawing some attention.”
CYBERSECURITY IS BOOMING IN VA
Cybersecurity is booming in Virginia. The state is home to the most cybersecurity companies per capita in the nation and 22 percent of Virginia’s workforce is employed in high-tech industries. This is the highest concentration of any of the 50 states and is only exceeded by the District of Columbia’s 26 percent concentration.
Virginia IT companies, from startups to large systems integrators, are leaders in the development of cybersecurity solutions for industry and government. The industry has a direct output of $41.5 billion and supports an additional output of $33.8 billion.
With the constant advancement of IT infrastructure security, Virginia-based companies are at the forefront of technologies such as cryptography, forensics, intrusion detection and firewall devices.
“Virginia’s cybersecurity ecosystem is built upon its central position in the nation’s security and communication infrastructure,” said Stephen Moret, President and CEO of Virginia Economic Development Partnership (VEDP).
Virginia is part of the nation’s Cyber Capital, the Washington, D.C. region. As the hub of leading-edge intelligence technology, the region serves as fertile ground for the growing cybersecurity industry. Thirty-seven of the Washington Technology Top 100 federal contracting companies are headquartered in Virginia, and 36 Virginia companies appear on the Cybersecurity 500, a list by Cybersecurity Ventures of the world’s hottest and most innovative cybersecurity companies.
In September 2017, Thomson Reuters announced it is locating its internal cybersecurity operations center in downtown Richmond, creating up to 60 jobs in highly specialized roles. Thomson Reuters is the world’s leading source of news and information for professional markets.
The company will occupy about 10,000 square feet at Riverfront Plaza and plans to employ up to 60 people by the end of 2018.
In a press statement, Tim McKnight, chief information security officer for Thomson Reuters, said the new “cyber fusion” center would be an important element of the company’s enhanced information security program.
“Establishing a presence in Richmond provides us excellent access to talent and cyber-related resources from the nearby academic, research and military communities,” said McKnight. “We look forward to being a good corporate citizen in the Richmond community.”
Besides its labor force, Richmond’s low operational cost factored into Thomson Reuters’ decision, according to the Greater Richmond Partnership. “We had the exact real estate that Thomson Reuters was looking for with an ideal talent pipeline so the project moved very quickly,” said Barry Matherly, president and CEO of the Partnership. “We hope to be able to attract more of the company’s operations in the future.”
In June 2016, Frontier Secure announced it is creating a cybersecurity customer care center in Wise County, creating 500 new jobs. Frontier Secure is a division of Frontier Communications Corporation, a leader in providing communications services to urban, suburban and rural communities in 29 states.
“Wise County offers the talent we depend on to drive our 100 percent U.S.-based operations, and we are proud to play a role in supporting economic growth in Southwest Virginia,” said Kelly Morgan, Senior Vice President & General Manager of Frontier Secure. “This expansion will provide us the additional capacity that we need to service our growing strategic partnership business.”
A partnership of state officials, higher education institutions and private-sector companies is now working on accelerator programs to advance the cybersecurity industry and train Virginia’s workforce for cyber opportunities.
“Our priority is to grow this industry in every region of the Commonwealth, with a particular focus on identifying cyber opportunities in more rural regions of the state,” said Moret. “Our main priority is STEM education starting at K-12 level, higher education programs to train the cyber workforce of the future and accelerator programs.”
The University of Virginia’s College at Wise in Southwest Virginia has partnered with Southwest Virginia Community College and Mountain Empire Community College in the Southwest Virginia Regional Cybersecurity Initiative.
“The College’s software engineering program, the only undergraduate software engineering program in Virginia, will accommodate the growing cybersecurity industry,” said Moret. “UVa-Wise will also expand its computer science and software engineering programs to include more faculty with cybersecurity expertise, and expand coursework in information security.”
In 2017, Marymount University, Virginia Tech, Liberty University and New River Community College were all hosts for GenCyber, a weeklong summer camp that teaches middle and high school students basic cybersecurity skills and encourages interest in cybersecurity careers.
Through Virginia’s Cyber Veterans Initiative, men and women who have served gain access to training, apprenticeship and job opportunities in the cybersecurity field.
“Because of Virginia’s significant military presence, thousands of veterans leave the military and remain in Virginia each year, contributing to a robust workforce,” said Moret.
Business accelerators include the Virginia Cyber Security Commission, Center for Innovative Technology (CIT), Northern Virginia Technology Council (NVTC) and the MACH37™ Cybersecurity Accelerator, an intense 90-day program in which select startups are mentored by cybersecurity entrepreneurs and receive a $50K investment to develop, test and pitch their ideas to security market investors. Since 2013, the MACH37 Cyber Accelerator at the Center for Innovative Technology has launched 40 cybersecurity companies.
MARYLAND: FEDERAL SECURITY HUB
Maryland has emerged as a leading cybersecurity location in recent years. It is home to renowned research centers, as well as business incubators and startup studios that provide expertise, space and capital to entrepreneurs. It’s also close to many of the federal government’s top agencies that play critical roles in protecting the nation’s physical and digital infrastructure—the National Security Agency, U.S. Cyber Command and CECOM, among others.
“Entrepreneurs coming out of those agencies adapt technology developed in federal agencies and bring that technology to the commercial market here in Maryland,” said Ken McCreedy, senior director for cybersecurity and aerospace at the Maryland Department of Commerce.
The state boasts more doctoral scientists and engineers than anywhere else in the nation, as well as an industry-related workforce more than 115,000 strong. Its higher-ed institutions include 16 National Centers of Academic Excellence in Cyber, designated by the NSA and Department of Homeland Security.
Maryland also offers tax credits and other incentive programs to help cyber companies grow and thrive such as the Cybersecurity Investment Incentive Tax Credit, the Employer Security Clearances Costs Tax Credit and the TEDCO Seed Investment Funds. “In short, Maryland’s network of talent, technology and research—coupled with an established industry base and a culture of entrepreneurship—make our state a cybersecurity powerhouse,” said McCreedy.
Leveraging the military and federal cyber-related agencies in the state, Maryland launched the first statewide cybersecurity initiative. Partnering with the National Institute of Standards and Technology (NIST), the state also established the National Cybersecurity Center of Excellence in Rockville to work with businesses to identify best practices across all critical sectors.
The Maryland Cybersecurity Council, created by Gov. Larry Hogan and authorized by the Maryland General Assembly in May 2015, works with NIST and other federal agencies to improve cybersecurity standards in Maryland.
“The council is reviewing laws and policies to ensure that Maryland is preparing the state and its businesses against cyber attacks,” said McCreedy. “Governor Hogan has made securing our data a priority of his administration, tasking his Office of Homeland Security with developing a comprehensive strategy.”
Maryland is now home to more than 30 business incubators and accelerators working to promote tech and cybersecurity growth in the state. These include Accelerate Baltimore, an initiative of the ETC (Emerging Technology Centers), which provides seed capital, resources, mentors, potential partners and a collaborative community to entrepreneurs; the bwtech@UMBC Cyber Incubator, which offers business and technical support to startup and early-stage cyber and IT companies; and DataTribe, supporting emerging companies with capital as well as in-kind services such as legal, financial, product management and marketing assistance.
Cyber companies located in Maryland have seen success and significant growth. For example, Tenable, located in Columbia, is one of the nation’s fastest-growing cybersecurity software companies. The company is planning a major expansion at its new corporate headquarters and will add hundreds of full-time employees over the next few years.
“We see this growth in companies across our state, and we anticipate the trend to continue,” said McCreedy.
Bandura Systems, also headquartered in Columbia, recently closed on $3.5 million in seed funding from Blu Venture Investors, Gula Tech Adventures, and TEDCO. The company plans to increase sales and development of its Threat Intelligence Gateway product.
BlueVoyant recently announced plans to establish a Global Cyber Analytics Center at the University of Maryland, College Park. The company will employ 25 highly skilled analysts and data scientists, with plans to add even more team members over the next few months.
And ZeroFOX, founded in Baltimore in 2013, secured $40 million in venture capital funding in 2017. The company now employs more than 100 people.
With cyber threats escalating every day, there is increasing demand for talented workers with up-to-date skills across the state. In addition, the federal government’s need for cyber workers has accelerated as agencies like the National Security Agency (NSA) and the U.S. Cyber Command at Fort George G. Meade continue to grow.
“Defense contractors supporting the federal government’s cyber agencies face similar challenges,” said McCreedy. “On the commercial side, there is high demand for cyber warriors, given Maryland’s success in attracting new business to the state, in supporting entrepreneurs creating companies here and in helping existing cyber companies grow and thrive in our state.”
As a result, Maryland’s colleges and universities have ramped up their cybersecurity-related curriculum and training to fuel the workforce pipeline.
The University of Maryland College Park (UMCP) recently received a $5 million grant from the National Science Foundation’s CyberCorps Scholarship for Service program, to fund scholarships for students in the university’s Advanced Cybersecurity Experience for Students (ACES) program. Launched with support from the Northrop Grumman Foundation, ACES was the first undergraduate honors program in cybersecurity in the U.S.
U.S. News & World Report ranks the University of Maryland Baltimore County (UMBC) fifth in the nation in innovation. Led by Freeman Hrabowski, Ph.D., one of Time Magazine’s 10 best college presidents, UMBC is the home of the Center for Information Security and Assurance.
Maryland also continues to train cyber workers in non-traditional settings such as internships and apprenticeships.
“As the required skill levels have also increased in the wake of a more sophisticated threat, additional resources such as the Baltimore Cyber Range and other training operations have provided a variety of options for those seeking to qualify for cyber jobs as well as those wishing to update their skills,” said McCreedy.
In addition, trade missions and outreach to international companies have showcased Maryland’s cyber capabilities to the world. A trade mission to Israel led to a partnership between Baltimore-based ETA and Israel-based Cyberbit to form the Baltimore Cyber Range, as well as ELTA North America’s decision to establish its cyber footprint in Maryland. During a trade mission to the United Kingdom, Maryland Commerce signed an MOU to strengthen the relationship between the state and the U.K.’s premier cyber cluster, the Midlands region.
FLORIDA BUILDS ON HIGH-TECH HISTORY
Florida has remained at the forefront of IT innovation since the birth of the IBM PC in Boca Raton. Today, the Sunshine State boasts the nation’s third largest tech industry. The state’s IT strengths are wide ranging—from software to photonics to modeling, simulation and training.
The state is quickly becoming a leader for companies focused on cybersecurity, including ThreatAdvice, Gartner, Citigroup, DTCC, ReliaQuest and KnowBe4.
In December 2017, ThreatAdvice opened its office in the UCF Business Incubation Center in Central Florida Research Park. The company plans to add 20 to 25 employees.
“The biggest reason we came here was the access to talent,” David Brasfield, CEO of ThreatAdvice, told the Orlando Business Journal. “The computer science and research programs at UCF are among the largest in the Southeast, and we wanted to get closer to that talent.”
Last spring, Gartner, Inc., a global information technology research and advisory company, announced it would expand in Lee County and create 600 jobs. The company also will invest more than $21 million in the local community. Gartner currently employs more than 1,250 Floridians.
Gartner began its Fort Myers operation nearly two decades ago. Since then, it has grown to become the company’s second-largest office worldwide. Gartner’s latest commitment to Florida follows the successful construction of two successive 120,000-square-foot buildings in Fort Myers in 2012 and 2014, and the development of a new world-class training facility in 2016, which totaled more than $46 million in capital investment.
“We appreciate the hard work of Enterprise Florida and the incentives approved by the State of Florida and Lee County,” said Gene Hall, chief executive officer of Gartner. “These efforts will help us continue to invest in the local community and support our long-term growth. We look forward to adding new talent to our existing workforce in Southwest Florida.”
Citigroup’s anti-money-laundering program is one of the best in the nation, and its hub is in Tampa. The company has partnered with the University of South Florida (USF) to create a training program for USF students in the field.
“We started a very unique program in cybersecurity that is truly interdisciplinary,” said Moez Limayem, the Dean of the Muma College of Business at USF. “Engineering is teaming up with business, teaming up with psychology, teaming up with criminology to make one of the best programs in cybersecurity.”
Currently, there are more than 300,000 cybersecurity positions open nationwide, and over 12,000 in Florida. Finding the talent to fill those positions is a challenge the state is addressing through programs at universities and colleges.
A recent report by the Florida Center for Cybersecurity (FC2), “The State of Cybersecurity in Florida,” found that the state is well positioned to develop a strong workforce, with nearly 100 cybersecurity certificate and degree programs offered by institutions of higher education across the state.
FC2 was established in 2014 to position Florida as a national leader in cybersecurity through education, innovative research and community outreach. Housed at USF, the Center is a statewide agency that works with all State University System (SUS) of Florida institutions, industry, the military, government and the community to build Florida’s cybersecurity workforce.
“Our mission is to make Florida the national leader in cybersecurity,” said Sri Sridharan, Director of FC2. “We want to build the workforce and show there is so much talent in Florida that businesses will want to come here because of the skillsets available.”
With a strong cyber workforce graduating from the University of West Florida (UWF) and an increase in funding from the Department of Homeland Security (DHS) and the National Security Agency (NSA), the Pensacola area is becoming a hotbed for cyber innovation.
UWF’s Center for Cybersecurity was designated by the NSA and DHS as a National Center of Academic Excellence in Cyber Defense Education. The Center provides leadership to advance cyber defense education among colleges and universities in Alabama, Florida, Georgia, Mississippi, Puerto Rico and South Carolina.
Florida’s Agency for State Technology (AST) has partnered with UWF to develop a curriculum that will deliver training on an ongoing basis, allowing for multiple sessions per year.
“Universities are the essential pipeline into state government, for all personnel talent, but provide even greater utility to the cyber workforce due to the highly specialized nature of the discipline,” said Thomas Vaughn, Chief Information Security Officer for AST.
The UWF Center for Cybersecurity propels innovative cybersecurity solutions and advances workforce readiness through education, training, partnerships and outreach; that outreach ranges from K-12 students to cybersecurity professionals.
“The partnership with UWF not only provides high quality training opportunities for state employees, but also offers exposure for UWF faculty and students to potential employment with the State of Florida,” said Vaughn.
Florida also is home to the Abanacle Cybersecurity Incubator, the second cybersecurity-focused incubator in the U.S. and the fourth cybersecurity incubator in the world.
Abanacle has partnered with Nova Southeastern University (NSU) in Davie, Florida, to operate the incubator, which is focused exclusively on cybersecurity companies and technologies. To complement the incubator, Abanacle and NSU will commercialize research at the university level and provide specialized training in cybersecurity.
The incubator will assist cybersecurity startups in developing and running their businesses, providing mentorship for each company and delivering a range of business resources that give companies the tools necessary to become successful.
MICHIGAN: THE FUTURE OF CYBERSECURITY
Michigan is forging the future in automotive, defense and advanced manufacturing cybersecurity. It’s home to the first school district in the nation to create a cybersecurity program and the largest concentration of electrical, mechanical and industrial engineers in the nation.
Michigan’s Cybersecurity Initiative, the world’s first comprehensive state-level approach to cyber, is improving the state’s defenses while fostering rapidly growing cyber talent and business environments.
Initiatives include the Michigan Cyber Range, which trains community members in cybersecurity response skills; the Michigan Cyber Command Center (MC3), which enhances coordination between the state’s fusion center and the State Emergency Operations Center; and development of the Michigan Cyber Disruption Response Plan, which will help government, industry and community organizations respond to malicious cyber activity.
“Michigan has set a standard for states in creating a center of excellence in cybersecurity management,” said Sarah Tennant, Strategic Advisor of Cyber Initiatives at the Michigan Economic Development Corporation (MEDC). “We are also the first state to create a chief security officer over both cyber and physical protection.”
The Cyber Range provides an unclassified, private environment to teach, test and train individuals and companies for real-world situations. By providing classes, exercises and virtual infrastructure, it gives companies an affordable and flexible way to prepare for cybersecurity challenges of all types.
Recently, two new Cyber Range Hubs were added at Pinckney Community High School and Wayne State University. The hubs serve as centers for the cyber ecosystem within their communities, providing certification courses in more than 20 cybersecurity disciplines.
Courseware is available at places like The Velocity Center, a first-of-its-kind cyber hub in the state. The Center holds classes and events to train the local workforce in cybersecurity techniques and skills.
Currently, there are more than 140,000 people working in the IT and cybersecurity industry in Michigan, more than half of whom are in metro Detroit. “Thriving businesses in the defense, mobility, automotive and IT industries offer ample opportunities for cybersecurity firms to succeed in Michigan,” said Tennant.
Ann Arbor-based Duo Security recently expanded to a central downtown location. Duo, which recently closed a round of fundraising worth $70 million at a valuation of $1.17 billion, provides advanced security solutions for organizations of all sizes, supplying companies with so-called access management tech, which includes two-factor authentication and single-sign on services.
“Duo plans to add to its incredible team of talented employees, thanks to a series C funding to address the emerging security challenges faced by organizations everywhere who struggle to secure their users against breaches in a modern IT environment,” said Tennant. “Duo was recently called out by Facebook’s Chief Security Officer Alex Stamos as his favorite tool made by an information security company.”
Michigan has been seeing a steady influx of companies that are looking to provide cybersecurity resources to industries that have not traditionally been considered tech companies, and there has also been a reinvestment in Michigan from traditional industries.
For example, General Motors will invest $1 billion in its Warren Technical Center and add about 2,600 jobs over the next four years. The new jobs will be spread among vehicle engineering, information technology and design.
GM will break ground in mid-2018 on a new studio building that will surround the iconic Design Dome Auditorium and viewing patio and connect to the existing Design Center. The 360,000-square-foot expansion is the final stage of a multiyear investment in GM’s Tech Center, a National Historic Landmark site.
“We can only begin to predict how mobility will change in future generations,” said Michael Simcoe, vice president of Global Design. “Investing in our creative and skilled team and providing them with inspiring, modern spaces, new technologies and more ways to work together will foster innovation that leads to real solutions for customers.”
Michigan is now staking its claim as the epicenter for mobility and autonomous vehicle technology and testing.
In December 2017, Visteon Corp. and Toyota Motor North America were the first to run tests at the American Center for Mobility (ACM), a federally designated testing facility established for both automakers and suppliers to perform self-certification on connected and automated technologies, as well as cybersecurity work on vehicles.
“We are excited to be open for testing and to have our founders already leveraging the assets of this facility,” said John Maddox, ACM’s president and CEO. “We have been moving rapidly, and along with good input from our founders, a great deal of work has gone into developing this site. Opening our doors is just the beginning as we continue to develop the American Center for Mobility into a global hub for CAV and future mobility technologies to put self-driving cars on America’s roads safely.”
The 600-acre driverless car proving ground sits on the site of the bomber plant Henry Ford built during World War II that came to symbolize the “Arsenal of Democracy.” The center has received investments from Ford Motor Co., Hyundai and AT&T. It includes a 2.5-mile highway loop, a 700-foot curved tunnel, two double overpasses, intersections and roundabouts.
Now in its sixth year, Michigan’s SAE Battelle CyberAuto Challenge™ is a groundbreaking event in automotive cybersecurity that takes place at Macomb Community College in Warren, the heart of the automotive industry.
The five-day workshop gathers automotive engineers, government engineers, ethical “white hat” hackers and students to work on cars in use today. Among other things, the hackathon aims to foster the skills that the automotive industry will need moving forward, by developing young automotive cybersecurity talent and keeping core automotive engineers connected to the cybersecurity community.
Defense is another leading industry that intersects with cybersecurity. Michigan’s defense assets include Selfridge Air National Guard Base and the 110th Airlift Wing in Battle Creek. The state, in conjunction with the Michigan National Guard, has established cyber range extension sites at nearly all bases throughout Michigan.
These ranges allow National Guard professionals to train in cybersecurity disciplines and host unclassified cyber exercises with inter-agency and private partners. Currently, the Michigan Army National Guard has roughly 600 Secret- and Top Secret-cleared computer specialty positions across the Michigan Army Cyber Protection Team and the Air National Guard Cyber Squadron. “These soldiers are a valuable asset to Michigan’s cyber industry and can help fill the demand for cybersecurity talent,” said Tennant. “One of the biggest challenges for everyone is the cybersecurity skills gap. We are looking to fill thousands of positions that we didn’t know would even exist 10 years ago.”
SAN ANTONIO, TX: CYBER CITY USA
Home to one of the top hubs for cybersecurity in the nation, Texas already is positioned to tap into the exponentially expanding national market for cyber defenses. San Antonio’s military presence has created a critical mass of cybersecurity operations in the area. Dubbed “Cyber City USA,” San Antonio is now home to more than 40 cybersecurity headquarters and the nation’s second-largest concentration of cybersecurity experts—with more than 1,000 employed by the U.S. Air Force within the Port San Antonio campus.
San Antonio’s cyber strengths are further amplified by the 24th Air Force, located at Joint Base San Antonio—Lackland, the operational warfighting organization that establishes, operates, maintains and defends Air Force networks; NSA Texas, which conducts worldwide signals intelligence, cyberspace operations and information assurance operations at a former Sony chip fabrication plant that’s now a sprawling intelligence center; and the Center for Infrastructure Assurance and Security, a cybersecurity center at the University of Texas at San Antonio.
Already a major hub for digital startups, Texas also is emerging as a go-to location for cybersecurity startups thanks to its robust infrastructure, network of industry partners and access to venture capital. The Consumer Technology Association (CTA) has recognized Texas as a top state for championing innovation-friendly policies, encouraging entrepreneurial activity and attracting investment.
Infocyte, Inc., which develops proactive cybersecurity solutions focused on breach discovery and malware hunting, has seen significant growth since opening in San Antonio in 2014. In 2017, the company doubled in size to 25 employees, and in 2018 it raised $5.2 million in its Series B Venture Capital round, bringing the total investment raised to $8.6 million.
According to Infocyte co-founder Chris Gerritz, the concentration of cybersecurity talent in San Antonio, driven by the presence of Air Force Cyber, NSA Texas and the hundreds of companies supporting those missions, makes the city unique.
“We have an incredible talent pipeline here in SATX which ensures we can be on top of the latest cyber threats,” said Gerritz. “We are able to recruit top-tier talent easier and for 30 percent less than what our peers/competition are paying in Silicon Valley or D.C. (mostly due to the cost of living difference and competition over the smaller pool of candidates).”
Infocyte has developed a software as a service platform, Infocyte HUNT, to enable any security professional to assess an organization for possible security breaches, unauthorized access and malicious software (malware).
In June 2017, Duo Security, one of the fastest-growing cybersecurity firms in the world, doubled its Austin footprint with a move to Austin’s historic Bosche-Hogg building. The Michigan-based firm counts Facebook, NASA, Toyota, Twitter and Virginia Tech among its clients.
Forcepoint LLC (formerly Websense), a leading global cybersecurity firm, relocated its headquarters from San Diego, California, to Austin, Texas, in 2014, creating 445 new jobs and investing over $9.9 million. For the project, the company received a $4.5 million grant from the Texas Enterprise Fund (TEF)—Texas’ deal-closing incentive fund, awarded through the Office of the Governor.
In 2016, San Antonio technology companies raised $600,000 to create an incubator program, Build Sec Foundry, to mentor young cybersecurity companies with management advice and help in finding funding. The effort is a public-private partnership born out of CyberSecurity San Antonio, an industry-driven program sponsored by the San Antonio Chamber of Commerce and local cybersecurity businesses.
In January 2018, Governor Greg Abbott announced a new training program, in partnership with the SANS Institute, designed to encourage young women to become involved in the field of cybersecurity. And in August 2017, the Governor’s Office approved almost $500,000 in federal grant funding to provide intensive networking and information security training and job placement support for individuals in the Lower Rio Grande Valley.
In addition, more than 25 Texas universities offer cybersecurity certificate or degree programs including Texas A&M’s Cybersecurity Center; the University of Texas at San Antonio’s cybersecurity program; and the University of Texas at Dallas Cyber Security Research and Education Institute.
UTAH DATA CENTER: CYBER HUB FOR U.S. INTELLIGENCE
The Utah Data Center, also known as the Intelligence Community Comprehensive National Cybersecurity Initiative Data Center, is a data storage facility for the National Security Agency designed to store data estimated to be on the order of exabytes or larger. Its purpose is to support the Comprehensive National Cybersecurity Initiative (CNCI). The NSA leads operations at the facility as the executive agent for the Director of National Intelligence. The Utah Data Center is located at Camp Williams near Bluffdale, Utah, between Utah Lake and the Great Salt Lake, and was completed in May 2014 at a cost of $1.5 billion.
Brigham Young University and Utah Valley University are the only two schools in the state with on-ground baccalaureate and graduate degrees in cybersecurity. However, you can earn a certificate, complete an online degree and/or connect to high-level research at a handful of other Utah schools. Here’s a roundup of who’s doing what:
BYU is recognized by the National Security Agency and the Department of Homeland Security as a Center of Academic Excellence in Cyber Defense Education. The reason for that accolade is its Cybersecurity & Systems Research Laboratory. The lab’s recent research efforts include using drones to map wireless networks, predicting system and network failures via log files for a project called SEAHORSE and creating a classification system for attacks against physical infrastructure, such as power plants.
After a second-place finish in 2016, the BYU cyber defense team took third place at the 2017 National Collegiate Cyber Defense Competition in San Antonio. Such students remain active on campus too. The BYU Cyber Security Students Academic Association, known as ITSec, meets twice a month to listen to guest experts, runs a yearlong Capture the Flag competition and hosts an annual student-led cybersecurity conference.
Utah Valley University established the Center for National Security Studies, which has begun embracing cyber issues. The Center boasts a lively event calendar and has recently sponsored panel discussions on privacy and protection in the digital age as well as Russia’s use of cyber warfare.
UVU also has positioned itself to recruit new cybersecurity junkies. In 2015, it created the annual Math, Cryptography and Cyber Security Conference and invited every high schooler in the state to come take workshops and learn code-making basics from UVU faculty.
University of Utah doesn’t have a cybersecurity degree program, but postgraduates can forge a research career under faculty at the School of Computing. Active professors include Matt Might, who won a $3 million Department of Defense grant in 2015 to develop software that alerts programmers to vulnerabilities in their code. That same year, his colleagues Eric Eide and John Regehr claimed $500K from the National Science Foundation to create Xsmith, which finds software defects in programming language compilers and interpreters.
Southern Utah University’s Computer Science and Information Systems Department runs an online master’s plus an on-ground associate degree in cybersecurity.