What is HITECH (Health Information Technology for Economic and Clinical Health) Act of 2009? – Definition from WhatIs.com – TechTarget

HITECH Act and meaningful use

As it was originally enacted, HITECH stipulated that, beginning in 2011, healthcare providers would be offered financial incentives for demonstrating meaningful use of EHRs until 2015, after which time penalties would be levied for failing to demonstrate such use.

Providers were able to start using EHRs as late as 2014 and avoid penalties, but the incentive payment they were eligible to receive was less than that of earlier adopters.

The rollout of meaningful use happens in three stages; providers must demonstrate meaningful use for two years in each stage before moving on to the next one.

Because adoption for stage 2 has been slow, the Centers for Medicare and Medicaid Services (CMS) announced in mid-2014 that it would put stage 3 off until 2017. Stage 3 of meaningful use was an option for providers that year, but it became mandatory for all participants in 2018. However, several groups have requested that stage 3 be either canceled or at least paused until 2019 due to concerns about provider and vendor readiness.

MACRA (Medicare Access and CHIP Reauthorization Act) included a category called Advancing Care Information that effectively replaced meaningful use while retaining certain aspects of the program. Healthcare providers are still required to report on meaningful use stage 3 measures, but will be able to choose which measures are best suited to their practice.

The Innovation Gap (Part 2): How To Reboot The Justice System On Technology – Above the Law

How is it that we can be at this time of unprecedented innovation in legal technology and yet see the justice gap only grow wider? How is it that the justice system is unable to meet the needs of 80 percent of low-income and 60 percent of moderate-income people, when there are so many cutting-edge tools available to streamline and broaden the delivery of legal services?

As I wrote last week in the first part of this two-part column, the problem is not a lack of innovative technology. It is, quite simply, that the legal system resists innovation. Last week, I explained why I believe the legal system resists innovation. This week, I’ll offer 10 suggestions for what to do about it.

The bottom line is this: If we are ever to provide access to justice to all who are in need, then legal technology must be a central part of the equation — it must be adopted and deployed to its fullest extent. And if technology is ever to be adopted and deployed to its fullest extent, we need to reboot the mind frame of the legal system.

How do we do that? Here are 10 suggestions.

  1. Require technology competence.

We fear that which we do not understand and see risk in that which we cannot command. Such is the case with technology. Yet too many lawyers remain technologically incompetent and therefore hinder or obstruct its broader use. Only by requiring lawyers to become competent in technology will we ever get past the fear and ignorance that hold us back.

On this front, we are already making progress. In 2012, the American Bar Association approved a change to Comment 8 of Rule 1.1 of the Model Rules of Professional Conduct, governing lawyer competence, to make clear that competent lawyering includes staying abreast of and understanding the benefits and risks of technology. So far, 28 states have adopted this change.

The potential implications of this rule, I think, are much greater than most lawyers realize.

Two major ethics opinions provide the best guidance we have so far on the meaning of the duty of technology competence. One was a 2015 California opinion dealing with technology in the context of eDiscovery. The other was a 2017 American Bar Association opinion dealing with technology in the context of protecting clients’ privileged communications and data.

On three overarching points, the two opinions agree:

  • First, both say that, at the outset of every matter, the lawyer has a responsibility to assess the technological issues that may arise and whether the lawyer has sufficient knowledge and skills to competently represent the client with respect to those issues.
  • Second, both say that, if the lawyer decides that he or she lacks the competence to handle those issues, then it is OK to bring on someone else who does have the necessary knowledge and skills, such as a colleague or consultant.
  • But third — and this is critical — the lawyer in charge of that case, even when a consultant is brought on, retains overall responsibility for supervising the case, for supervising everyone working on the case, and for ensuring that the client’s confidences and best interests are protected, including with respect to the technology issues.

This is an ethical Catch-22. Even if you decide you’re not fully competent with regard to the technology in the case, you still have to have enough competence to be able to make that initial assessment, and you still have to have enough competence to be able to supervise the technology issues for the life of the case.

Both opinions offer examples of the kinds of issues a competent attorney should be able to assess. Most attorneys, I believe, do not have the degree of competence these opinions suggest they should. The ABA opinion, for example, requires that attorneys be able to understand:

  • The nature of any threat to email or data;
  • How client confidential information is transmitted and where it is stored; and
  • How to use reasonable electronic security measures.

Most lawyers do not even know how to encrypt an email, let alone understand how data is transmitted or how to identify a security threat. These ethics opinions should be wake-up calls to the legal profession. It is time to become competent in technology, and the degree of competence required of you may be greater than you think.

  2. Require – or at least provide – tech training.

Lawyers should be required to undergo technology training. So far, one state has taken this leap. Florida mandates that all lawyers, as part of their continuing legal education obligation, get training in technology. No other state has yet followed suit, but my guess is that others soon will.

In the absence of requiring technology training, we need to make it widely available and provide incentives for lawyers to take it. One source of such incentives is clients. Many are already demanding that their lawyers be skilled in technology.

An example of this in practice is the Procertas Technology Assessment. Developed by Casey Flaherty when he was general counsel of Kia Motors, the goal was to screen outside counsel for those who lacked basic proficiency in the technologies lawyers use every day: word processing, spreadsheets, and PDF. Not surprisingly, many failed.

Flaherty took the assessment private, creating the company Procertas to make the assessment more widely available. It not only tests lawyers on basic skills, but it also trains them on the skills in which they fall short. For corporate clients, this makes a lot of sense. Lawyers who are proficient in technology are likely to deliver their services more efficiently, at lower cost and with better results.

Another option is to offer formal training programs for lawyers about technology. Unfortunately, outside of CLE programs, few exist. That is why it was notable earlier this year when Suffolk Law School launched a program offering a certificate in legal innovation and technology. Starting this summer, Suffolk will offer a series of six online courses. Lawyers who successfully complete all six will be awarded certification.

  3. Push (shame) law firms to innovate.

If law firms continue to resist innovation, then why not shame them into innovating?

This was the suggestion of Jim Sandman, president of the Legal Services Corporation, in several recent speeches. Publications such as The American Lawyer routinely rank law firms according to factors such as profitability, diversity or pro bono service. These rankings incentivize firms to do better, so their rankings will improve in the next round. Why not, suggested Sandman, rank law firms on innovation and use of technology?

No magazine took Sandman up on his suggestion, but Daniel W. Linna Jr. did. Last August, Linna, director of The Center for Legal Services Innovation at Michigan State University College of Law, launched the Legal Services Innovation Index. It endeavors to do just what Sandman suggested — rank and compare law firms according to the degree to which they are innovative and make effective use of technology.

Dan makes no secret of the fact that this is directed at clients as much as at the law firms. “My hope,” he says, “is that clients, including legal departments, will consult this index and engage in deep discussions with their lawyers about how to improve legal services delivery.”

  4. Push (shame) law schools to teach tech and innovation.

The same idea applies to law schools. Overall, law schools do an abysmal job of preparing students for 21st Century law practice. Law schools do not teach the technology tools or skills students will need to succeed. Of course, there are exceptions. A small number of schools are leading the charge for teaching innovation and technology. But other schools need to be pushed or shamed into following suit.

Once again, Dan Linna has stepped up to the plate, building on his Legal Services Innovation Index to launch the Law School Innovation Index. The index rates law schools on the degree to which they teach students about innovative technologies and engage students in using innovative technologies.

One of the schools that’s on that list is Brigham Young University Law School in Provo, Utah. Just last week, I wrote about BYU’s LawX lab and its release of an online tool, SoloSuit, developed by students to help low- and moderate-income individuals respond to debt collection lawsuits. The idea of the lab is to have students, over the course of a single semester, research an access-to-justice problem, design a solution, and then build that solution.

This is not only great training for students. It is also of real benefit to those who are without legal help. Although the tool was created to help with cases in Utah, it was designed to be adaptable to any state and even to other kinds of cases. Imagine if every law school were doing this.

  5. Let go of the idea that lawyers, alone, can close the gap.

We have to let go of the idea that lawyers, alone, can ever close the justice gap. The legal establishment continues to cling to the belief that all we need to do is throw more lawyers at the problem and it will go away — get lawyers to do more pro bono, get more funding for legal services, etc. I’m all for more pro bono and more funding, but lawyers, alone, will never be enough to solve this problem.

Consider these findings from Gillian Hadfield, an economist and law professor at the University of Southern California, as reported in the ABA’s 2016 Report on the Future of Legal Services in the United States:

  • Lawyers would have to increase their pro bono efforts to over 900 hours a year each to provide some measure of assistance to all households with legal needs.
  • To provide just one hour of legal aid to everybody with an unmet legal need would require a budget of $50 billion a year. The current expenditure on legal services from all public and private sources is $3.7 billion.

Lawyers, alone, can never close the justice gap. We have to open the system to other and more innovative ways of delivering legal services.

  6. Let go of the idea that lawyers are the gold standard.

We have to abandon the belief that lawyers are always the gold standard in delivering legal services. Lawyers hold dearly to this idea and it is a big part of what drives the organized bar’s protectionism. A legal service delivered by anyone (or anything) other than a lawyer, the belief holds, is necessarily inferior.

That is just not true.

Take, for one example, eDiscovery review. During discovery in large litigation matters, it may be necessary to review thousands or even millions of documents to determine which are potentially relevant. Of necessity, lawyers increasingly use technology to assist with this review — it is, after all, physically impossible to manually review such large numbers of documents.

That said, lawyers generally believe that anything short of manual, eyes-on review by a lawyer will produce less-accurate results. Studies, however, demonstrate just the opposite. Technology-assisted review tools that use machine learning can, in fact, produce more accurate results than lawyers alone, and do so in a fraction of the time and a fraction of the cost. The same is true for machine-learning technologies in large-scale contract review.
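
To make the idea concrete, here is a minimal sketch of how technology-assisted review works in principle: a lawyer codes a small seed set, a model learns from those decisions, and the model then ranks the unreviewed corpus so likely-relevant documents surface first. The documents and labels below are invented, and real TAR platforms layer active learning and validation protocols on top of this basic loop.

```python
# Minimal illustration of technology-assisted review (TAR).
# Hypothetical documents and labels; real TAR workflows add
# active learning, sampling-based validation, and QC.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents a lawyer has already coded.
seed_docs = [
    "Board minutes discussing the merger timeline",
    "Cafeteria menu for the week of March 3",
    "Email chain negotiating indemnification terms",
    "Holiday party planning thread",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant

# Unreviewed corpus to be prioritized.
unreviewed = [
    "Draft side letter on indemnification caps",
    "Parking garage access instructions",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Rank unreviewed documents by predicted probability of relevance.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```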

The fact is that, for certain tasks, technology can be used to enhance the delivery of legal services, even when a lawyer is not involved or only minimally involved, delivering better results with greater efficiency.

  7. Rewrite the regulatory rules.

If ever there is to be meaningful change, we need to rewrite the rules that regulate the practice of law and the delivery of legal services.

The scope of the problem is such that it is never going to be solved as long as lawyers have a monopoly on the delivery of legal services. We need to open up the delivery of legal services to allow corporate ownership of law firms, because corporations are going to invest in the innovations and scale that will pave the way for legal services to be delivered more efficiently and broadly. To quote Gillian Hadfield:

Meeting demand will require a massive shift in the production technology for legal services to dramatically reduce costs. The only way to achieve the kind of scale and innovation needed is through the corporate practice of law.

We also need to open the delivery of legal services to nonlawyer providers, as Washington state has done with its limited license legal technicians.

Until we change the rules to allow meaningful and substantive innovations in the delivery of legal services, we’ll never make the progress we need.

  8. Measure what works and what doesn’t.

Remarkably, there is scant evidence of what works and what doesn’t in the delivery of legal services. In the history of U.S. legal practice, there have been just 50 randomized controlled trials of what works and what doesn’t. The way we practice is basically to say, “I’m a lawyer, trust me, I know what I’m doing,” or “I’m a judge, trust me, I know what I’m doing.” We make critical decisions and life-altering recommendations based on our guts, rather than on evidence and data.

“In no field is resistance to evidence-based thinking more ferocious than in United States legal practice,” says Jim Greiner, director of Harvard Law School’s Access to Justice Lab. Greiner and his team are working to change this state of affairs. They are using randomized controlled trials to compile evidence-based research aimed at better understanding what works or doesn’t in narrowing the justice gap and delivering legal services.

So is the legal profession welcoming Greiner’s research with open arms? Not really. In fact, he’s met a fair amount of resistance from others in the legal field, in part because his research doesn’t always show what people want to hear.

Take this question: Is a tenant in housing court better off receiving limited, lawyer-for-the-day assistance or full legal representation?

Ask 10 lawyers that question and they’ll all give you the same answer: Full lawyer representation is always going to be better than the lawyer for the day.

Yet, in one of his earliest and most controversial studies, Greiner found otherwise. In a study of tenants facing eviction in housing court, he found no appreciable difference in outcomes between tenants who received full legal representation and those who received limited lawyer-for-the-day assistance.

People can and will quibble with this research and Greiner himself has done subsequent research with varying results. But that is exactly the point. We need evidence-based research to get the answers to these questions. The more we can understand what truly works, then the more progress we’ll be able to make.

  9. Encourage funding sources to promote innovation and testing.

Organizations that fund legal services should use the power of their wallets to promote innovation and testing.

The Legal Services Corporation has led the way in this respect through its annual Technology Initiative Grants, providing roughly $5 million a year to fund technology innovation in the delivery of legal services. However, beyond these TIG grants, investment in A2J technology is woefully inadequate.

On the state and regional levels, some forward-looking organizations are also investing in A2J technology. The Florida Bar Foundation, to name just one, has been a leader in pushing innovation and technology among its grantees, as have been both the Chicago and Illinois bar foundations among theirs.

But, having recently served as president of the Massachusetts Bar Foundation, I am sensitive to the fact that funders face a difficult balancing act. Thanks to severe cutbacks in IOLTA and other funding sources, their grantees are struggling to meet the demand for their services. To divert money towards technology development is to take it away from direct services.

We can no longer afford to be shortsighted. The struggle to keep up with demand will only get worse unless we innovate. Bar foundations and others who fund legal services need to invest some of that money in technology innovation and development. We can’t keep spending all our money trying to plug the dam when what we need is a new dam.

  10. Push courts to innovate.

Courts must be pushed to innovate. By innovate, I do not mean find better ways to keep doing what they’re doing. Courts need to think differently about who it is they serve and what it means to deliver justice.

Many courts are working hard to both keep up with caseloads and also develop better ways to meet the needs of unrepresented litigants. But courts are stuck in an antiquated model of dispute resolution, a model designed in another era for a system of lawyers and judges, and not for a flood of unrepresented litigants.

Courts need to rethink what they do and how they do it. Perhaps the best example of how technology can be used to resolve disputes is eBay. Early on, eBay realized that, if it was to succeed as a platform for buying and selling, it needed a way to resolve the disputes that would inevitably arise. And it had to be a system that could deal with potentially millions of disputes, often involving small dollar amounts, between buyers and sellers who could be anywhere in the world.

Traditional court systems could never work for eBay, so it built its own dispute resolution system. It has been a remarkable success, handling 60 million disputes a year and successfully resolving more than 90 percent, most without third-party assistance.

Courts need to turn away from business as usual and examine alternative ways to address the disputes that come to them and how technology can be part of a solution.

Now, Get Out There and Proselytize

Many of you reading this, like me, live in a legal tech bubble. We get it, and we tend to talk, work and interact with others who get it. We get that technology and innovation enable us to better do our jobs and serve our clients. We understand the potential for technology to play a key role in closing the justice gap.

But those of us in this bubble are the minority. The majority of lawyers remain fearful of technology, ignorant about technology, or just don’t see it as important.

I’ve outlined 10 recommendations for rebooting the legal system’s mindset around technology. But there is one other important piece. Those of us who get it need to break out of our bubbles and spread the word — proselytize, if you will. Blog. Speak. Spread the word.

If you are a lawyer, write about how you use technology in your practice and the benefits you see from it. If you are a developer, let others know what you are working on and what it can do. If you work at a legal aid organization, tell others about how technology helps your organization and clients.

In this post, I’ve written abstractly about how the system needs to change, without directing those recommendations at any specific actor. But everyone who is reading this can take action starting today, by spreading the word about technology and by lobbying bars, courts and institutions to become open to innovation. Ultimately, education is the key to changing the mindset of the legal system, and each of you can play a role in making that happen.


Robert Ambrogi is a Massachusetts lawyer and journalist who has been covering legal technology and the web for more than 20 years, primarily through his blog LawSites.com. Former editor-in-chief of several legal newspapers, he is a fellow of the College of Law Practice Management and an inaugural Fastcase 50 honoree. He can be reached by email at ambrogi@gmail.com, and you can follow him on Twitter (@BobAmbrogi).

Western Digital’s CIO Modernizes Technology In The Face Of Two Major Acquisitions – Forbes Now

The integration of a major acquisition is the bane of the existence of many executives. Chief information officers have special challenges during such scenarios, since, after all, they must think about the people, processes, and technologies that must be integrated. This is an enormous amount of change to usher in.

When Steve Phillpott became CIO of Western Digital, the $19 billion revenue developer, manufacturer and provider of data storage devices and solutions, less than two and a half years ago, the company was nearly a third of its current size. With the 2012 acquisition of HGST, which regulators did not permit the company to begin integrating until late 2015, and the acquisition of SanDisk in 2016, there was no avoiding the fact that this would be a heavy lift for Phillpott and his colleagues. This was all announced in his first months on the job.

Phillpott noted that across the technology stacks of the three companies, in most areas, two-thirds of employees would be impacted, as the “winning” solution would be named. Phillpott recognized this as an opportunity to choose best-in-breed solutions across the technology portfolio. The mandate for change that any integration brings about would be a boon.

This would lead to the integration of more than 3,000 applications and would test the company’s change management practices, but Phillpott and his team have made enormous progress, as he notes herein.

(To listen to an unabridged audio version of this interview, please visit this link. To read future articles like this one, please follow me on Twitter @PeterAHigh.)

Peter High: You are the Chief Information Officer of Western Digital. In its current generation, Western Digital is the combination of three multi-billion dollar organizations: Western Digital itself, the 2012 acquisition of HGST, and the acquisition in 2016 of SanDisk. I know you took an interesting approach to integrating these companies. Please explain.

Western Digital CIO Steve Phillpott (credit: Western Digital)

Steve Phillpott: We integrated three large multi-billion dollar, Fortune 500 companies into one future Fortune 150-ish company. You are looking at integrating systems, integrating processes, and integrating technologies. As we started on this journey, this integration became a great opportunity to transform the company. By transforming the company I mean looking at those applications, systems, and processes that we have today, thinking about where we want to be in a couple of years, and starting to lay the foundation for that journey.

Consider ERP as an example. Across the three legacy sub-companies, we had three different ERP systems. Going forward, we could have picked any one of the three. In a typical acquisition where you have two companies, one large and one small, it may default to the larger company’s ERP. When two like-sized companies integrate, you may flip a coin, or you pick the best one and go forward. With three, it provides an interesting dynamic because, at a minimum, two-thirds of the company are going to have to go through change.

Our thinking was if two-thirds of the company are going to have to go through that massive amount of change, why should we not look at a newer, best-in-breed solution and have the entire company go through that change. What that does is it allows us to transform a foundational application that will support us as we grow to $20 billion, $25 billion, and beyond. It became a great opportunity to go through and rebuild processes and applications that we knew would not scale. We were able to revisit chart of accounts, revisit cost centers, and revisit the reconciliation process with an eye to the future and a focus on ensuring those processes would support us as we grow past a $20 billion company.

High: You have had to rationalize around 3,000 applications. Could you share how far you have come in that process, as well as what learnings you have had?

Phillpott: We had roughly 3,000-plus applications across the three major companies, and then we added a couple more acquisitions after that, which added more applications to the mix. I would say we are still early in the journey, but we have completed some major activities and 2017 was a very productive year for us. We focused on getting a lot of the collaboration and communication tools correct. Communication tools allow for the flattening of the organization, which is increasingly important as you are trying to go through these integration activities.

The speed at which companies can effectively collaborate is essential in helping move these integrations forward and trying to harmonize the processes in the system. The other interesting thing about focusing on those collaboration and communication tools is it also sets us up well for future M&As. Once we get those in as we move forward and have more acquisitions, we can bring them into the mix much quicker. We determined best-of-breed technologies across a variety of communication and collaboration tools.

Globally, we have everybody on the same email, the same file sharing around Box, the same intranet, Jive, WebEx, Jabber – those core collaboration and communication tools. If you look beyond those initial communication and collaboration activities, we have started to migrate many of the legacy applications. Now we are on a global Human Capital Management system which we consolidated with our CRM. We consolidated on a global ServiceNow instance which was interesting because that is another area where we try to get everybody’s information into one area, but it is diverse in terms of what activity we need to help the end users with.

We shut down over 18 data centers, and we have consolidated most of the security activities including common single sign-on architecture. During that process, we have learned and matured things. One of the areas that I think has been most important is around governance. An integration journey like this is a long one, and you cannot do everything at once. You must have good governance in place that allows you to build out the programs.

We have program work streams, such as an ERP program workstream, a workstream around product lifecycle management, and a master data management workstream. We have all these program work streams. These allow us to start identifying what that journey is going to be, what that application roadmap is going to be, what applications we want to retain, and then which applications are going to eventually be retired and feed into our synergy. The governance was a big part of getting things started.

The second learning and understanding is not underestimating change management. As much as the integration activities are around the applications and the technology, it is really the people that are changing. That change management tends to be a significant activity. What we have done for these program work streams is we set up a change management program within them. For example, each phase of the ERP has a change management program to ensure we are bringing along the users at that same rate. The goal is that the users will be ready when we finish the project. The users will be trained, they will know what to do, and they will know the benefits.

We have done that for all our large programs where we are building in that change management capability. Those are a couple of the big learnings.

High: As you thought about the change, what criteria did you use for sequencing?

Phillpott: There are several criteria. As I mentioned, the collaboration and communication tools were critical. Also, there are certain aspects of security that were critical. Then there are foundational things that we prioritized, such as, on the ERP side, the finance topics that need to get started, and master data management. After that, we can look at some of the other activities, maybe where we have other urgent needs, or perhaps around quick wins. We wove some of those in.

Then the third bucket was interim and integrated reporting. We are on this three-year or four-year journey and we set the expectation that in phase three or phase four, certain capabilities are going to be available. From a business standpoint, we may not be able to wait that long to get that capability, and so what we have is a separate work stream on interim reporting or integrated reporting. This is looking and saying, “In ERP phase three we are going to get certain reporting capabilities that are going to be consolidated across the three subs.”

We cannot really wait that long. What we will do is we will spin off a parallel project that will do some level of interim reporting capability. We went through an exercise where we looked at all the big areas and built a three-by-three scorecard; from the items in the upper right, we picked four or five and started those. Those give us short-term reporting capabilities, but hopefully, what they also do is help prime and prep some of the prerequisites for those future ERP deployments while giving the business some capability earlier, so they do not have to wait until the end of the journey.

High: I know that you have developed a predictive analytics as a service offering. Could you share some details on this, perhaps some of the people/skills aspects or partners that you have engaged to help bring it to life?

Phillpott: The predictive analytics as a service is part of our big data and analytics platform. We looked at how to build an environment and provide a flexible platform that is future proof to the greatest extent possible. In an organization as large as ours there are a lot of diverse user groups, and there are various levels of maturity in analytics. We wanted to make sure that the platform itself had the capability to support all these users. It is great that we all want to get to 100% [artificial intelligence/machine learning]-powered learning, but it takes a long time to bring an entire organization there.

We wanted to ensure that our big data platform could support a variety of workloads, technologies, and architectures. Predominantly, the big data platform supports manufacturing and operation capabilities, trying to look at how we improve yields and the performance of our manufacturing operations. If we look at the levels, we have a first horizontal level which is around all the data that we collect, shop floor data, sensor data, quality data, test log data, etc. That all comes into our centralized big data platform.

On top of that is the process layer, where we extract and establish the digital DNA of the data. We will have technologies from Hadoop and Cloudera. We will use massively parallel processing (MPP) engines from Teradata and Amazon Redshift. We will provide NoSQL capabilities, MongoDB, HBase, and Neo4j for graphs. Then we will also have machine learning capabilities, Spark, TensorFlow, and streaming capabilities from Apache technologies. That is all at that process engine layer.

Now, with the data layer and then this process engine layer, we can go back out into the business and provide a wide variety of analytics capabilities throughout the business to realize value. We look at a couple of those towers and work from least complex to most complex. On the least complex side, you will have things like visualization capabilities, predesigned reports, dashboards, etc. You will use technologies like Tableau or Spotfire reporting for this data.

As we move up the maturity curve, that next group of folks is getting into more ad hoc queries and predictive analytics. We look at event drill-down data. We focus on yield loss, excursions, quality issues, and we use tools like SAS, JMP, MATLAB, RStudio, etc. Now, we keep moving up the stack, and we provide some capabilities to do AI-powered business intelligence. We take some of the learnings from artificial intelligence, but we feed them into the more traditional BI capabilities, providing drill-down capabilities. [This includes] event-based outcomes, asset health monitoring, etc. We are using more advanced AI and machine learning, but it puts it in a more consumable format.

Then on the far top of the maturity learning curve, we start getting into the artificial intelligence powered learning. We leverage self-healing systems, advanced machine learning, or neural networks, and we use that type of technology to improve yield management. Or we might use those same machine learning algorithms to identify and optimize our test processes.

One of the biggest investments in hard drive manufacturing can be test equipment. We have neural networks and machine learning that help us optimize that test environment which can help us save hundreds of millions of dollars in capital.

Having those four different categories of analytics capabilities allows users to come in at any maturity level. Operations users can come in at that visualization layer, and they can use tools they are familiar with like Tableau or Spotfire. Or we can extend that all the way up to advanced machine learning capabilities for the data scientists who are trying to help us make not just small incremental steps but massive, large steps by automating some of this work.
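
To make the layering concrete, here is a minimal, purely illustrative sketch of the process-layer pattern Phillpott describes, using PySpark: sensor and test data already landed in the data layer are assembled into features and used to train a simple yield model. The path, column names, and model choice are hypothetical, not Western Digital’s actual pipeline.

```python
# Toy version of the "process layer" pattern: raw shop-floor/sensor
# data in a central store feeds a model that predicts whether a unit
# passes final test. Paths, columns, and features are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("yield-model-sketch").getOrCreate()

# Sensor and test data previously ingested into the data layer (hypothetical path).
df = spark.read.parquet("s3://example-bucket/fab-sensor-data/")

# Assemble a feature vector from a few hypothetical process measurements.
assembler = VectorAssembler(
    inputCols=["temperature", "vibration", "head_flying_height"],
    outputCol="features",
)
train = (
    assembler.transform(df)
    .withColumn("label", col("passed_final_test").cast("double"))
    .select("features", "label")
)

# A simple classifier stands in for the richer models (gradient-boosted
# trees, neural networks) a production pipeline would use.
model = LogisticRegression().fit(train)
print("training AUC:", model.summary.areaUnderROC)
```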

High: You have also focused on high-performance computing. I wonder what your vision is for that and whether it was a tough sell bringing that into the business?

Phillpott: For the high-performance computing [HPC] area, I do not think many people realize that in the making of hard drives and SSDs, we do a massive amount of engineering simulations. We have significant high-performance computing capabilities to help us do those engineering simulations. The drive when I started this program was two big buckets. One, how can we improve the time to market, and how can we speed up the innovation cycle for how we build our products? Two, how can we reduce the cost of that development work?

We set up our high-performance computing environment like the big data environment in the sense that we wanted to have the right technology and the right architectures that were tuned for the workload of the engineers. I have four types of high-performance computing clusters: on-premise CPUs [central processing units], on-premise GPUs [graphic processing units], cloud CPUs, and cloud GPUs.

Now, with CPU-GPU clusters on-premise and CPU-GPU clusters in the cloud, we can help the engineers through a whole variety of workloads and use case scenarios. For example, one of the use cases will be what we call hyperscale CPU clusters. This is where we are doing product design optimization and we are looking at Head-Disk interface simulations. We look at millions of these. We have moved some of those simulations up to the cloud. In those cases, we run some of the world’s largest engineering simulations where we use more than 80,000 cores.

It takes what was previously a 30-day simulation job and reduces the simulation time to eight or nine hours. We have significantly reduced the development timeframe, and that leverages the cloud CPUs. A use case around the GPUs is where the GPU clusters can accelerate some of these heavy workloads, especially ones that require machine learning.

The GPUs can operate 40 or 50 times faster than the CPUs on some of these ML workloads. Between the GPUs on-premise and the GPUs on the cloud, we can figure out what is the right fit for engineers. Another one is leveraging some of the container architectures. We have built some containerized HPCs to help us with things like solid-state drive, system behavior simulations, etc. The last use case is helping on the cost side. Those three that I just talked about help reduce the cycle time of development as well as the cost.

As we look at cloud technologies, we have been able to run some of these engineering simulations in Amazon Spot instances. This allows us to get compute at about 70% off the cost. As long as we can monitor the timeframe for getting these instances, which might stretch it out a little bit, there is a very significant cost benefit. Right now, we run several workloads, and we just let them sit in Spot Instances at a significantly reduced cost. Again, it is about trying to innovate, giving the engineers the right technology for the right job, and helping them accelerate the business value.
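
As a rough illustration of the Spot approach described above, the snippet below launches a single Spot instance for a batch simulation job with boto3. The AMI, instance type, and price cap are placeholders, and a production HPC environment would normally provision Spot capacity through a scheduler, EC2 Fleet, or AWS Batch rather than raw API calls.

```python
# Illustrative only: launch one Spot instance for a batch simulation
# job. The AMI, instance type, and max price are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical simulation AMI
    InstanceType="c5.18xlarge",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",  # do not resubmit the request after interruption
            "MaxPrice": "1.50",              # placeholder cap, well below on-demand
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```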

High: As is clear from your description of the business that you are in, you are a Chief Information Officer at a tech-centric organization. In these types of organizations, tech talent is disseminated across the enterprise, as opposed to concentrated solely within IT itself. I am always interested in how CIOs at tech-centric organizations differentiate themselves and their departments. How do you and your team do so?

Phillpott: I am surrounded by extremely smart and talented people. What that does is it keeps you and the team on your toes constantly. It is all about figuring out how to add value. Ultimately, it is the IT side and the business learning more about each other, because recognizing and implementing new opportunities will come more easily as we know more about each other. One of the ways to accomplish this is by partnering with the business and having business partners from IT gain a deeper understanding of what the business is looking for and what their pain points are.

It is also looking for those opportunities where we can partner with the business on new products. For example, Western Digital on Western Digital, where we become the first customer for products. An example of this is our ActiveScale product which is five petabytes of S3 compatible object storage in a single rack at a very low total cost of ownership. IT was the first customer to deploy that and since deploying that, we are continually innovating new use cases for it. We do e-discovery. We do virtual server images. We do backups to our big data repository. We archive engineering data to it. We are continually driving new use cases on how we can use that.

The other way we add value is by reaching out to external partners. For example, reaching out to Splunk, which is one of the technologies we use on the security side. We worked with them to leverage our ActiveScale on the back end. We also work with server backup companies so that our server backups stream directly to our ActiveScale product. IT is reaching out and helping with some of these, and we are working with the business on how we can increase the use cases for our technology.

Then, one of the other ways is around bringing in new technologies that allow us to innovate faster. For example, we talked about the HPC environments. The HPC environments are a perfect example of where we provide newer technology that shortens the innovation cycle or helps automate some of the lower level activities that free up engineers to focus on high-value activities. Those are just a couple of the areas where, with a close partnership with the technology groups, we can help them add value.

High: We talked a lot about rising technology trends and new technologies that you are integrating into the enterprise, artificial intelligence being primary perhaps. Are there other trends as you look two or three years forward that you might have begun to explore or that excite you?

Phillpott: There are a couple of areas. One of the biggest areas is what we are doing around artificial intelligence and machine learning, and we also have our core big data platform and analytics capabilities. This is an area that is innovating at an incredibly fast pace and we are trying to keep up with how this technology is changing. It has been exciting to watch since it seems like every month or every six months, there is some new major development. We have some innovative capabilities there, where we set aside a certain amount of resources to innovate with.

What is the next new architecture, what are the next new capabilities around artificial intelligence, what are some of those new tools around machine learning, what is happening with the industrial IoT and streaming architectures? I think for us, artificial intelligence and machine learning are the areas that are going to mature the most over the next couple of years. We are looking for some extremely significant value and capabilities from these technologies.

Peter High is President of Metis Strategy, a business and IT advisory firm. His latest book is Implementing World Class IT Strategy. He is also the author of World Class IT: Why Businesses Succeed When IT Triumphs. Peter moderates the Forum on World Class IT podcast series. He speaks at conferences around the world. Follow him on Twitter @PeterAHigh.

MetroHealth’s CMIO on Leveraging IT To Push Forward into Value-Based Care and Patient Engagement – Healthcare Informatics

Founded in 1837, Cleveland’s MetroHealth System is an integrated health system operating three hospitals, one of which, MetroHealth Medical Center, serves as Cuyahoga County’s public safety-net hospital. Annually, the health system handles more than one million patient visits, including more than 100,000 in the emergency department, one of the busiest in the country.

In many ways, MetroHealth is at the forefront of health IT and the use of technology to enhance clinical care. In 2014, the health system was designated as Stage 7 on the ambulatory electronic medical record adoption model (A-EMRAM) by HIMSS Analytics, the research arm of the Chicago-based Healthcare Information and Management Systems Society (HIMSS). Stage 7 represents the highest level of EMR adoption and indicates a health system’s advanced electronic patient record environment. MetroHealth was among the first safety-net health systems in the country to reach Stage 7 status, and the first to do so using the Verona, Wis.-based Epic Systems. In addition, the health system continues to move forward into population health management and value-based care and payment models.

David Kaelber, M.D., Ph.D., is the chief medical informatics officer (CMIO) at MetroHealth System, a position he has held for the past nine years, and is leading or involved in a number of health IT initiatives at the organization. Dr. Kaelber, who also has a Master of Public Health degree, is slated to be a speaker at Healthcare Informatics’ Cleveland Health IT Summit at the Hilton Cleveland Downtown on March 27 to 28. Among other topics, Dr. Kaelber will share MetroHealth’s road to success with regard to the health system’s MetroHealth Care Partners Accountable Care Organization, one of 30 successful Medicare Shared Savings Program (MSSP) ACOs. Healthcare Informatics’ Associate Editor Heather Landi caught up with Dr. Kaelber to discuss his top priorities right now as they relate to IT initiatives at MetroHealth, as well as the healthcare technology developments that are on his radar in 2018. Below are excerpts of that interview.

What are your top priorities right now?

The MetroHealth System is trying to move very quickly into value-based care instead of fee-for-service, and accelerate our move into more pay for performance. Within the MetroHealth system, we see that there is a huge technology catalyst that needs to occur to make that happen. So, it’s a big chunk around analytics, and within analytics, I’d put predictive analytics into that. We’re really trying to get our data processes better, as well as reporting and those predictive tools. The best data set or analytics or predictive tool is only as good as whether people can use that to make some change that otherwise wouldn’t be possible. So, analytics is not the end, it’s the end of the beginning. It’s necessary but not sufficient.

One hallmark that demonstrates our competence in that is, last year, we were one of a relatively small number of organizations that succeeded in our Medicare Shared Savings Program ACO (accountable care organization) contract. My view of that is it was informatics or health IT-enabled success. If I pulled out the informatics rug from underneath what the population health team was doing, I don’t think we would have been successful, because most of what they did was enabled by both the analytics and the workflow tools that we put into our system, and these are the tools that all the providers as well as care coordinators and the population health team used to enable us to achieve that success. That has been a major push, and I think it continues to be a major push. We’re not doing the same population health that we were doing last year. Every year, not only is the bar being raised by the payers, but we’re also trying to get new contracts with more and more payers. So, I think that’s huge.

Another area I’d point to is patient engagement. We’re trying to be very aggressive with our personal health record. I think it’s particularly of note that we have a very diverse patient population, both socio-economically and educationally. Traditionally, you would expect that a personal health record for patient engagement might not be quite as high a priority in the strategy. But, for us, we see that as a significant tool for the future.

We’ve implemented this thing called fast pass. Patients can already self-schedule most of their appointments, and many of their procedures, online. What fast pass does is, if another patient cancels an appointment for a time slot much earlier than yours, the system automatically sends you an email or text to let you know that the doctor has an opening, and if you want the earlier appointment, it automatically reschedules you. We’ve been live almost a year now. At this point we have almost 100 appointments being automatically re-self-scheduled per week. On average, the appointment gets moved up by about 23 days. It’s a win-win for everybody. Patients like it because they get to see their doctor earlier. As a system, we like it, because it fills our schedules and it does it in an automated way.
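
To make the mechanics concrete, here is a minimal sketch of the fast-pass idea in code. It is purely illustrative and assumes a simplified data model; it is not MetroHealth’s or Epic’s implementation, and offering the freed slot to the patient who would move up the most is just one plausible policy.

```python
# Hypothetical sketch of the "fast pass" idea: when a cancellation
# frees an earlier slot with a provider, offer it to opted-in patients
# whose appointments with that provider are later.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class Appointment:
    patient: str
    provider: str
    time: datetime
    fast_pass_opt_in: bool = True


def offer_freed_slot(freed_slot_time, provider, appointments, notify):
    """Offer a freed slot to opted-in patients booked later with the same provider."""
    candidates = [
        a for a in appointments
        if a.provider == provider and a.fast_pass_opt_in and a.time > freed_slot_time
    ]
    # Offer the slot to whoever would move up the most; a real system
    # might instead work through a waitlist or notify several patients.
    for appt in sorted(candidates, key=lambda a: a.time, reverse=True):
        if notify(appt.patient, freed_slot_time):  # patient accepts via email/text link
            appt.time = freed_slot_time
            return appt
    return None


# Example usage with a stubbed notification step that always accepts.
appts = [
    Appointment("pat-1", "dr-smith", datetime(2018, 4, 20, 9, 0)),
    Appointment("pat-2", "dr-smith", datetime(2018, 5, 15, 14, 0)),
]
rescheduled = offer_freed_slot(
    datetime(2018, 3, 28, 10, 0), "dr-smith", appts, notify=lambda p, t: True
)
print(rescheduled)
```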

You mentioned patient engagement efforts as it relates to the personal health record and the socio-economic diversity of your patient population. Why is that a challenge?

We published, back in the spring, an article in the Journal of the American Medical Informatics Association (JAMIA), where we saw a direct correlation between broadband internet access and MyChart sign-up and usage. We know that in neighborhoods where the population is of lower socio-economic status, broadband access is not as prevalent. What our study showed is that if you live in a zip code where broadband service is lower, then sign-up for electronic health record portals is also lower. (Editor’s note: The study concluded that the majority of adults with outpatient visits to a large urban health care system did not use the patient portal, and initiation of use was lower for racial and ethnic minorities, persons of lower socioeconomic status, and those without neighborhood broadband internet access. These results suggest the emergence of a digital divide in patient portal use. Find the full study here.)

Another area of interest that we’ve studied, but haven’t yet published findings on, is that while it may be true that in some lower socio-economic areas home internet access is not as prevalent as in other areas, smartphones seem to be relatively ubiquitous. And that has helped inform our personal health record strategy. We’ve had a number of conversations about this with Epic, and our view is that features and functions for the smartphone should perhaps take priority over personal health record functions developed for desktops or laptops or for the website. The way I would frame that is, people of higher socio-economic status probably have good home internet access and a smartphone, so presumably they could be using a website, a desktop or the smartphone, versus other people who might not have easy access to home internet or desktops but still probably have close to the same penetration of smartphones.

Looking broadly at the health IT industry, what trends are you interested in, and what developments are you watching?

There are a couple of things. I still think this idea of big data, predictive analytics, I still say it gets a lot of hype, and I think that “there is a there there somewhere.” But, as a CMIO, I can’t really live so much by “I hope it’s going to work,” I have to live in the world where I need to do things that I have high confidence will work. I still don’t know where to put my money in that space, to a large degree, so I’m watching that area a lot.

Health information exchange is another area I’m watching, and we’re trying to do a lot in that area at MetroHealth. As we move more into the ACO and pay for performance space, I think the idea of having complete information on a patient becomes more and more important. At the MetroHealth system, we’re pretty well ahead of the game because we have all providers/patients on one system (the Epic electronic health record). But, even in that model, when we’ve looked at it, something like two-thirds to three-quarters of our patients get at least some care today or historically have gotten some care at another healthcare system, and if we don’t have good insight into that care that occurs and if we don’t get as much discrete data about that care as possible, then we’re not providing the highest value care to the patient. We have a number of initiatives, with the Social Security Administration, with the VA (U.S. Department of Veterans Affairs) or through the eHealth Exchange and the Sequoia Project, and now with our state-based HIE and our Epic-based HIE, to really try to get as much data as possible.

Then this is moving more into stage 3 of Meaningful Use, where there is this requirement to reconcile external information. We’re spending a lot of time figuring out how to do that in a seamless way. I think there are many opportunities to figure out how that could work better. At this point, I see patients where I might have a medication list of 20 medications from the Cleveland Clinic and I’m trying to mesh that with the 15 medications that I have in my healthcare system already and that takes a lot of time and energy to do that. On some level, it should be value-add because you want the complete medication list on a patient, but a lot of times, it ends up being a very inefficient use of a physician’s time. So how can we make that all work better?
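
At its core, the medication-reconciliation step described above is a record-matching problem. The toy sketch below assumes both lists carry RxNorm codes (real HIE feeds are far messier, with free-text entries, missing codes, and dose mismatches), and it is not the Epic workflow being referenced; the codes and names are made up for illustration.

```python
# Toy illustration of external medication reconciliation: merge an
# outside list into the local one, flagging overlaps and additions.
# Assumes both lists carry RxNorm codes; values here are invented.

def reconcile(local_meds, external_meds):
    """Return (already_on_file, new_from_outside) keyed by RxNorm code."""
    local_by_code = {m["rxnorm"]: m for m in local_meds}
    already_on_file, new_from_outside = [], []
    for med in external_meds:
        if med["rxnorm"] in local_by_code:
            already_on_file.append(med)
        else:
            new_from_outside.append(med)  # needs clinician review before adding
    return already_on_file, new_from_outside


local = [{"rxnorm": "0001", "name": "medication A 10 mg tablet"}]
external = [
    {"rxnorm": "0001", "name": "medication A 10 mg tablet"},
    {"rxnorm": "0002", "name": "medication B 500 mg tablet"},
]
overlap, additions = reconcile(local, external)
print(len(overlap), "already on file;", len(additions), "need review")
```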

I’m also interested in staff satisfaction. People talk about the triple aim of health care, and then that morphed into the quadruple aim, and the difference is that the fourth aim is around staff satisfaction. We’re really trying to think about how we can use the EHR, not only to help with value-based care and population health to achieve things like decreased cost, improved health of populations and improved patient experience, but, in addition to those three things, how we can also really improve the staff experience. There have been studies that show that providers and physicians are spending hours per day charting. How can we change that paradigm so that the EHR is really improving and enhancing, not only the patient experience, but also the staff experience in taking care of patients through the EHR? That’s an area that I am interested in as well.

The 40 Best Workplaces in Technology – Fortune

Beyond the Bitcoin Bubble – The New York Times

To see how enormous but also invisible the benefits of such protocols have been, imagine that one of those key standards had not been developed: for instance, the open standard we use for defining our geographic location, GPS. Originally developed by the United States military, the Global Positioning System was first made available for civilian use during the Reagan administration. For about a decade, it was largely used by the aviation industry, until individual consumers began to use it in car navigation systems. And now we have smartphones that can pick up a signal from GPS satellites orbiting above us, and we use that extraordinary power to do everything from locating nearby restaurants to playing Pokémon Go to coordinating disaster-relief efforts.

But what if the military had kept GPS out of the public domain? Presumably, sometime in the 1990s, a market signal would have gone out to the innovators of Silicon Valley and other tech hubs, suggesting that consumers were interested in establishing their exact geographic coordinates so that those locations could be projected onto digital maps. There would have been a few years of furious competition among rival companies, who would toss their own proprietary satellites into orbit and advance their own unique protocols, but eventually the market would have settled on one dominant model, given all the efficiencies that result from a single, common way of verifying location. Call that imaginary firm GeoBook. Initially, the embrace of GeoBook would have been a leap forward for consumers and other companies trying to build location awareness into their hardware and software. But slowly, a darker narrative would have emerged: a single private corporation, tracking the movements of billions of people around the planet, building an advertising behemoth based on our shifting locations. Any start-up trying to build a geo-aware application would have been vulnerable to the whims of mighty GeoBook. Appropriately angry polemics would have been written denouncing the public menace of this Big Brother in the sky.

But none of that happened, for a simple reason. Geolocation, like the location of web pages and email addresses and domain names, is a problem we solved with an open protocol. And because it’s a problem we don’t have, we rarely think about how beautifully GPS does work and how many different applications have been built on its foundation.

The open, decentralized web turns out to be alive and well on the InternetOne layer. But since we settled on the World Wide Web in the mid-’90s, we’ve adopted very few new open-standard protocols. The biggest problems that technologists tackled after 1995 — many of which revolved around identity, community and payment mechanisms — were left to the private sector to solve. This is what led, in the early 2000s, to a powerful new layer of internet services, which we might call InternetTwo.

For all their brilliance, the inventors of the open protocols that shaped the internet failed to include some key elements that would later prove critical to the future of online culture. Perhaps most important, they did not create a secure open standard that established human identity on the network. Units of information could be defined — pages, links, messages — but people did not have their own protocol: no way to define and share your real name, your location, your interests or (perhaps most crucial) your relationships to other people online.

This turns out to have been a major oversight, because identity is the sort of problem that benefits from one universally recognized solution. It’s what Vitalik Buterin, a founder of Ethereum, describes as “base-layer” infrastructure: things like language, roads and postal services, platforms where commerce and competition are actually assisted by having an underlying layer in the public domain. Offline, we don’t have an open market for physical passports or Social Security numbers; we have a few reputable authorities — most of them backed by the power of the state — that we use to confirm to others that we are who we say we are. But online, the private sector swooped in to fill that vacuum, and because identity had that characteristic of being a universal problem, the market was heavily incentivized to settle on one common standard for defining yourself and the people you know.

The self-reinforcing feedback loops that economists call “increasing returns” or “network effects” kicked in, and after a period of experimentation in which we dabbled in social-media start-ups like Myspace and Friendster, the market settled on what is essentially a proprietary standard for establishing who you are and whom you know. That standard is Facebook. With more than two billion users, Facebook is far larger than the entire internet at the peak of the dot-com bubble in the late 1990s. And that user growth has made it the world’s sixth-most-valuable corporation, just 14 years after it was founded. Facebook is the ultimate embodiment of the chasm that divides InternetOne and InternetTwo economies. No private company owned the protocols that defined email or GPS or the open web. But one single corporation owns the data that define social identity for two billion people today — and one single person, Mark Zuckerberg, holds the majority of the voting power in that corporation.

If you see the rise of the centralized web as an inevitable turn of the Cycle, and the open-protocol idealism of the early web as a kind of adolescent false consciousness, then there’s less reason to fret about all the ways we’ve abandoned the vision of InternetOne. Either we’re living in a fallen state today and there’s no way to get back to Eden, or Eden itself was a kind of fantasy that was always going to be corrupted by concentrated power. In either case, there’s no point in trying to restore the architecture of InternetOne; our only hope is to use the power of the state to rein in these corporate giants, through regulation and antitrust action. It’s a variation of the old Audre Lorde maxim: “The master’s tools will never dismantle the master’s house.” You can’t fix the problems technology has created for us by throwing more technological solutions at it. You need forces outside the domain of software and servers to break up cartels with this much power.

But the thing about the master’s house, in this analogy, is that it’s a duplex. The upper floor has indeed been built with tools that cannot be used to dismantle it. But the open protocols beneath them still have the potential to build something better.

One of the most persuasive advocates of an open-protocol revival is Juan Benet, a Mexican-born programmer now living on a suburban side street in Palo Alto, Calif., in a three-bedroom rental that he shares with his girlfriend and another programmer, plus a rotating cast of guests, some of whom belong to Benet’s organization, Protocol Labs. On a warm day in September, Benet greeted me at his door wearing a black Protocol Labs hoodie. The interior of the space brought to mind the incubator/frat house of HBO’s “Silicon Valley,” its living room commandeered by an array of black computer monitors. In the entrance hallway, the words “Welcome to Rivendell” were scrawled out on a whiteboard, a nod to the Elven city from “Lord of the Rings.” “We call this house Rivendell,” Benet said sheepishly. “It’s not a very good Rivendell. It doesn’t have enough books, or waterfalls, or elves.”

Benet, who is 29, considers himself a child of the first peer-to-peer revolution that briefly flourished in the late 1990s and early 2000s, driven in large part by networks like BitTorrent that distributed media files, often illegally. That initial flowering was in many ways a logical outgrowth of the internet’s decentralized, open-protocol roots. The web had shown that you could publish documents reliably in a commons-based network. Services like BitTorrent or Skype took that logic to the next level, allowing ordinary users to add new functionality to the internet: creating a distributed library of (largely pirated) media, as with BitTorrent, or helping people make phone calls over the internet, as with Skype.

Sitting in the living room/office at Rivendell, Benet told me that he thinks of the early 2000s, with the ascent of Skype and BitTorrent, as “the ‘summer’ of peer-to-peer” — its salad days. “But then peer-to-peer hit a wall, because people started to prefer centralized architectures,” he said. “And partly because the peer-to-peer business models were piracy-driven.” A graduate of Stanford’s computer-science program, Benet talks in a manner reminiscent of Elon Musk: As he speaks, his eyes dart across an empty space above your head, almost as though he’s reading an invisible teleprompter to find the words. He is passionate about the technology Protocol Labs is developing, but also keen to put it in a wider context. For Benet, the shift from distributed systems to more centralized approaches set in motion changes that few could have predicted. “The rules of the game, the rules that govern all of this technology, matter a lot,” he said. “The structure of what we build now will paint a very different picture of the way things will be five or 10 years in the future.” He continued: “It was clear to me then that peer-to-peer was this extraordinary thing. What was not clear to me then was how at risk it is. It was not clear to me that you had to take up the baton, that it’s now your turn to protect it.”

Protocol Labs is Benet’s attempt to take up that baton, and its first project is a radical overhaul of the internet’s file system, including the basic scheme we use to address the location of pages on the web. Benet calls his system IPFS, short for InterPlanetary File System. The current protocol — HTTP — pulls down web pages from a single location at a time and has no built-in mechanism for archiving the online pages. IPFS allows users to download a page simultaneously from multiple locations and includes what programmers call “historic versioning,” so that past iterations do not vanish from the historical record. To support the protocol, Benet is also creating a system called Filecoin that will allow users to effectively rent out unused hard-drive space. (Think of it as a sort of Airbnb for data.) “Right now there are tons of hard drives around the planet that are doing nothing, or close to nothing, to the point where their owners are just losing money,” Benet said. “So you can bring online a massive amount of supply, which will bring down the costs of storage.” But as its name suggests, Protocol Labs has an ambition that extends beyond these projects; Benet’s larger mission is to support many new open-source protocols in the years to come.
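To make that contrast concrete, here is a minimal sketch, in Python, of the content-addressing idea behind IPFS (the helper names are invented for illustration, not IPFS's actual API): a document's address is the hash of its bytes, so any node holding matching bytes can serve it, and every past version keeps its own permanent address.

```python
import hashlib

# A toy content-addressed store: the "address" of a document is the
# SHA-256 hash of its bytes, not the server it happens to live on.
store = {}  # hash -> bytes; in a real network this is scattered across many peers

def put(content: bytes) -> str:
    """Store content and return its content address (the hash digest)."""
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    """Any peer holding bytes that hash to `address` can answer this request."""
    content = store[address]
    # The response is self-verifying: re-hashing proves it is the right content.
    assert hashlib.sha256(content).hexdigest() == address
    return content

v1 = put(b"<html>my homepage, first draft</html>")
v2 = put(b"<html>my homepage, revised</html>")
# Both versions stay retrievable by their own addresses, which is the gist
# of the "historic versioning" described above.
print(get(v1), get(v2))
```

Contrast this with an HTTP URL, which names a single server location: if that server goes away or overwrites the page, the old version is simply gone.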

Why did the internet follow the path from open to closed? One part of the explanation lies in sins of omission: By the time a new generation of coders began to tackle the problems that InternetOne left unsolved, there were near-limitless sources of capital to invest in those efforts, so long as the coders kept their systems closed. The secret to the success of the open protocols of InternetOne is that they were developed in an age when most people didn’t care about online networks, so they were able to stealthily reach critical mass without having to contend with wealthy conglomerates and venture capitalists. By the mid-2000s, though, a promising new start-up like Facebook could attract millions of dollars in financing even before it became a household brand. And that private-sector money ensured that the company’s key software would remain closed, in order to capture as much value as possible for shareholders.

And yet — as the venture capitalist Chris Dixon points out — there was another factor, too, one that was more technical than financial in nature. “Let’s say you’re trying to build an open Twitter,” Dixon explained while sitting in a conference room at the New York offices of Andreessen Horowitz, where he is a general partner. “I’m @cdixon at Twitter. Where do you store that? You need a database.” A closed architecture like Facebook’s or Twitter’s puts all the information about its users — their handles, their likes and photos, the map of connections they have to other individuals on the network — into a private database that is maintained by the company. Whenever you look at your Facebook newsfeed, you are granted access to some infinitesimally small section of that database, seeing only the information that is relevant to you.

Running Facebook’s database is an unimaginably complex operation, relying on hundreds of thousands of servers scattered around the world, overseen by some of the most brilliant engineers on the planet. From Facebook’s point of view, they’re providing a valuable service to humanity: creating a common social graph for almost everyone on earth. The fact that they have to sell ads to pay the bills for that service — and the fact that the scale of their network gives them staggering power over the minds of two billion people around the world — is an unfortunate, but inevitable, price to pay for a shared social graph. And that trade-off did in fact make sense in the mid-2000s; creating a single database capable of tracking the interactions of hundreds of millions of people — much less two billion — was the kind of problem that could be tackled only by a single organization. But as Benet and his fellow blockchain evangelists are eager to prove, that might not be true anymore.

So how can you get meaningful adoption of base-layer protocols in an age when the big tech companies have already attracted billions of users and collectively sit on hundreds of billions of dollars in cash? If you happen to believe that the internet, in its current incarnation, is causing significant and growing harm to society, then this seemingly esoteric problem — the difficulty of getting people to adopt new open-source technology standards — turns out to have momentous consequences. If we can’t figure out a way to introduce new, rival base-layer infrastructure, then we’re stuck with the internet we have today. The best we can hope for is government interventions to scale back the power of Facebook or Google, or some kind of consumer revolt that encourages that marketplace to shift to less hegemonic online services, the digital equivalent of forswearing big agriculture for local farmers’ markets. Neither approach would upend the underlying dynamics of InternetTwo.

The first hint of a meaningful challenge to the closed-protocol era arrived in 2008, not long after Zuckerberg opened the first international headquarters for his growing company. A mysterious programmer (or group of programmers) going by the name Satoshi Nakamoto circulated a paper on a cryptography mailing list. The paper was called “Bitcoin: A Peer-to-Peer Electronic Cash System,” and in it, Nakamoto outlined an ingenious system for a digital currency that did not require a centralized trusted authority to verify transactions. At the time, Facebook and Bitcoin seemed to belong to entirely different spheres — one was a booming venture-backed social-media start-up that let you share birthday greetings and connect with old friends, while the other was a byzantine scheme for cryptographic currency from an obscure email list. But 10 years later, the ideas that Nakamoto unleashed with that paper now pose the most significant challenge to the hegemony of InternetTwo giants like Facebook.

The paradox about Bitcoin is that it may well turn out to be a genuinely revolutionary breakthrough and at the same time a colossal failure as a currency. As I write, Bitcoin has increased in value by nearly 100,000 percent over the past five years, making a fortune for its early investors but also branding it as a spectacularly unstable payment mechanism. The process for creating new Bitcoins has also turned out to be a staggering energy drain.

History is replete with stories of new technologies whose initial applications end up having little to do with their eventual use. All the focus on Bitcoin as a payment system may similarly prove to be a distraction, a technological red herring. Nakamoto pitched Bitcoin as a “peer-to-peer electronic-cash system” in the initial manifesto, but at its heart, the innovation he (or she or they) was proposing had a more general structure, with two key features.

First, Bitcoin offered a kind of proof that you could create a secure database — the blockchain — scattered across hundreds or thousands of computers, with no single authority controlling and verifying the authenticity of the data.
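A minimal sketch of why such a scattered database can stay trustworthy (Python, purely illustrative): each block commits to the hash of the block before it, so altering any historical record changes every subsequent hash and is immediately visible to the other computers holding copies.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents in a canonical (sorted-key) JSON encoding.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev_hash": prev_hash, "transactions": transactions}

# A tiny two-block chain.
genesis = make_block("0" * 64, ["alice pays bob 5"])
second = make_block(block_hash(genesis), ["bob pays carol 2"])

def verify(chain: list) -> bool:
    """Every block must reference the hash of the block that precedes it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify([genesis, second]))                    # True
genesis["transactions"][0] = "alice pays bob 5000"  # tamper with history
print(verify([genesis, second]))                    # False: the links no longer match
```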

Second, Nakamoto designed Bitcoin so that the work of maintaining that distributed ledger was itself rewarded with small, increasingly scarce Bitcoin payments. If you dedicated half your computer’s processing cycles to helping the Bitcoin network get its math right — and thus fend off the hackers and scam artists — you received a small sliver of the currency. Nakamoto designed the system so that Bitcoins would grow increasingly difficult to earn over time, ensuring a certain amount of scarcity in the system. If you helped Bitcoin keep that database secure in the early days, you would earn more Bitcoin than later arrivals. This process has come to be called “mining.”
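A rough sketch of that mining incentive (Python; the search loop is a toy, far easier than real Bitcoin difficulty, while the reward schedule mirrors Bitcoin's published one of 50 coins halving every 210,000 blocks): a computer hunts for a nonce that gives the block an acceptably rare hash, and the payout for finding one shrinks on a fixed schedule, so early helpers earn more than latecomers.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def reward(block_height: int, initial: float = 50.0, halving_every: int = 210_000) -> float:
    """The block reward halves on a fixed schedule, making new coins ever scarcer."""
    return initial / (2 ** (block_height // halving_every))

print("found nonce:", mine("prev_hash + today's transactions"))
print(reward(0), reward(210_000), reward(420_000))  # 50.0 -> 25.0 -> 12.5
```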

For our purposes, forget everything else about the Bitcoin frenzy, and just keep these two things in mind: What Nakamoto ushered into the world was a way of agreeing on the contents of a database without anyone being “in charge” of the database, and a way of compensating people for helping make that database more valuable, without those people being on an official payroll or owning shares in a corporate entity. Together, those two ideas solved the distributed-database problem and the funding problem. Suddenly there was a way of supporting open protocols that wasn’t available during the infancy of Facebook and Twitter.

These two features have now been replicated in dozens of new systems inspired by Bitcoin. One of those systems is Ethereum, proposed in a white paper by Vitalik Buterin when he was just 19. Ethereum does have its currencies, but at its heart Ethereum was designed less to facilitate electronic payments than to allow people to run applications on top of the Ethereum blockchain. There are currently hundreds of Ethereum apps in development, ranging from prediction markets to Facebook clones to crowdfunding services. Almost all of them are in pre-alpha stage, not ready for consumer adoption. Despite the embryonic state of the applications, the Ether currency has seen its own miniature version of the Bitcoin bubble, most likely making Buterin an immense fortune.

These currencies can be used in clever ways. Juan Benet’s Filecoin system will rely on Ethereum technology and reward users and developers who adopt its IPFS protocol or help maintain the shared database it requires. Protocol Labs is creating its own cryptocurrency, also called Filecoin, and has plans to sell some of those coins on the open market in the coming months. (In the summer of 2017, the company raised $135 million in the first 60 minutes of what Benet calls a “presale” of the tokens to accredited investors.) Many cryptocurrencies are first made available to the public through a process known as an initial coin offering, or I.C.O.

The I.C.O. abbreviation is a deliberate echo of the initial public offering that so defined the first internet bubble in the 1990s. But there is a crucial difference between the two. Speculators can buy in during an I.C.O., but they are not buying an ownership stake in a private company and its proprietary software, the way they might in a traditional I.P.O. Afterward, the coins will continue to be created in exchange for labor — in the case of Filecoin, by anyone who helps maintain the Filecoin network. Developers who help refine the software can earn the coins, as can ordinary users who lend out spare hard-drive space to expand the network’s storage capacity. The Filecoin is a way of signaling that someone, somewhere, has added value to the network.
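A hedged sketch of the "coins in exchange for labor" idea (Python; the accounting rule and rate are invented for illustration, not Filecoin's actual economics): a shared ledger credits tokens to whoever verifiably contributes storage to the network.

```python
from collections import defaultdict

# Hypothetical token ledger: address -> token balance.
ledger = defaultdict(float)

TOKENS_PER_GB_HOUR = 0.01  # made-up exchange rate, purely illustrative

def credit_storage(provider: str, gigabytes: float, hours: float) -> None:
    """Reward a provider for disk space it verifiably kept online for the network."""
    ledger[provider] += gigabytes * hours * TOKENS_PER_GB_HOUR

credit_storage("0xStorageHost1", gigabytes=500, hours=24)     # a hobbyist's spare drive
credit_storage("0xStorageHost2", gigabytes=20_000, hours=24)  # a small data center
print(dict(ledger))
```

In a real network the "verifiably" part is the hard problem, which is why systems of this kind lean on cryptographic proofs rather than an honor system.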

Advocates like Chris Dixon have started referring to the compensation side of the equation in terms of “tokens,” not coins, to emphasize that the technology here isn’t necessarily aiming to disrupt existing currency systems. “I like the metaphor of a token because it makes it very clear that it’s like an arcade,” he says. “You go to the arcade, and in the arcade you can use these tokens. But we’re not trying to replace the U.S. government. It’s not meant to be a real currency; it’s meant to be a pseudo-currency inside this world.” Dan Finlay, a creator of MetaMask, echoes Dixon’s argument. “To me, what’s interesting about this is that we get to program new value systems,” he says. “They don’t have to resemble money.”

Pseudo or not, the idea of an I.C.O. has already inspired a host of shady offerings, some of them endorsed by celebrities who would seem to be unlikely blockchain enthusiasts, like DJ Khaled, Paris Hilton and Floyd Mayweather. In a blog post published in October 2017, Fred Wilson, a founder of Union Square Ventures and an early advocate of the blockchain revolution, thundered against the spread of I.C.O.s. “I hate it,” Wilson wrote, adding that most I.C.O.s “are scams. And the celebrities and others who promote them on their social-media channels in an effort to enrich themselves are behaving badly and possibly violating securities laws.” Arguably the most striking thing about the surge of interest in I.C.O.s — and in existing currencies like Bitcoin or Ether — is how much financial speculation has already gravitated to platforms that have effectively zero adoption among ordinary consumers. At least during the internet bubble of the late 1990s, ordinary people were buying books on Amazon or reading newspapers online; there was clear evidence that the web was going to become a mainstream platform. Today, the hype cycles are so accelerated that billions of dollars are chasing a technology that almost no one outside the cryptocommunity understands, much less uses.

Let’s say, for the sake of argument, that the hype is warranted, and blockchain platforms like Ethereum become a fundamental part of our digital infrastructure. How would a distributed ledger and a token economy somehow challenge one of the tech giants? One of Fred Wilson’s partners at Union Square Ventures, Brad Burnham, suggests a scenario revolving around another tech giant that has run afoul of regulators and public opinion in the last year: Uber. “Uber is basically just a coordination platform between drivers and passengers,” Burnham says. “Yes, it was really innovative, and there were a bunch of things in the beginning about reducing the anxiety of whether the driver was coming or not, and the map — and a whole bunch of things that you should give them a lot of credit for.” But when a new service like Uber starts to take off, there’s a strong incentive for the marketplace to consolidate around a single leader. The fact that more passengers are starting to use the Uber app attracts more drivers to the service, which in turn attracts more passengers. People have their credit cards stored with Uber; they have the app installed already; there are far more Uber drivers on the road. And so the switching costs of trying out some other rival service eventually become prohibitive, even if the chief executive seems to be a jerk or if consumers would, in the abstract, prefer a competitive marketplace with a dozen Ubers. “At some point, the innovation around the coordination becomes less and less innovative,” Burnham says.

The blockchain world proposes something different. Imagine some group like Protocol Labs decides there’s a case to be made for adding another “basic layer” to the stack. Just as GPS gave us a way of discovering and sharing our location, this new protocol would define a simple request: I am here and would like to go there. A distributed ledger might record all its users’ past trips, credit cards, favorite locations — all the metadata that services like Uber or Amazon use to encourage lock-in. Call it, for the sake of argument, the Transit protocol. The standards for sending a Transit request out onto the internet would be entirely open; anyone who wanted to build an app to respond to that request would be free to do so. Cities could build Transit apps that allowed taxi drivers to field requests. But so could bike-share collectives, or rickshaw drivers. Developers could create shared marketplace apps where all the potential vehicles using Transit could vie for your business. When you walked out on the sidewalk and tried to get a ride, you wouldn’t have to place your allegiance with a single provider before hailing. You would simply announce that you were standing at 67th and Madison and needed to get to Union Square. And then you’d get a flurry of competing offers. You could even theoretically get an offer from the M.T.A., which could build a service to remind Transit users that it might be much cheaper and faster just to jump on the 6 train.
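To make the thought experiment concrete, here is a minimal sketch of what an open Transit exchange might look like (Python; Transit itself and every field name here are hypothetical, per the scenario above): the rider broadcasts a request that anyone can read, and any provider, whether a taxi fleet, a bike-share collective or the M.T.A., can answer with an offer.

```python
from dataclasses import dataclass

@dataclass
class TransitRequest:
    rider: str          # the rider's public address on the open network
    origin: str         # "67th and Madison"
    destination: str    # "Union Square"

@dataclass
class TransitOffer:
    provider: str
    price: float
    eta_minutes: int

def collect_offers(request: TransitRequest, providers) -> list:
    """Broadcast the request and gather offers from every willing responder."""
    offers = []
    for provider in providers:
        offer = provider(request)
        if offer is not None:
            offers.append(offer)
    return offers

# Two hypothetical responders to the same open request.
def yellow_cab(req): return TransitOffer("YellowCab #42", price=14.50, eta_minutes=4)
def mta(req):        return TransitOffer("MTA 6 train", price=2.75, eta_minutes=9)

request = TransitRequest("0xRider", "67th and Madison", "Union Square")
for offer in sorted(collect_offers(request, [yellow_cab, mta]), key=lambda o: o.price):
    print(offer)
```

The point of the sketch is that no single company sits between the request and the responses; the rider's app simply sorts whatever offers come back.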

How would Transit reach critical mass when Uber and Lyft already dominate the ride-sharing market? This is where the tokens come in. Early adopters of Transit would be rewarded with Transit tokens, which could themselves be used to purchase Transit services or be traded on exchanges for traditional currency. As in the Bitcoin model, tokens would be doled out less generously as Transit grew more popular. In the early days, a developer who built an iPhone app that uses Transit might see a windfall of tokens; Uber drivers who started using Transit as a second option for finding passengers could collect tokens as a reward for embracing the system; adventurous consumers would be rewarded with tokens for using Transit in its early days, when there are fewer drivers available compared with the existing proprietary networks like Uber or Lyft.

As Transit began to take off, it would attract speculators, who would put a monetary price on the token and drive even more interest in the protocol by inflating its value, which in turn would attract more developers, drivers and customers. If the whole system ends up working as its advocates believe, the result is a more competitive but at the same time more equitable marketplace. Instead of all the economic value being captured by the shareholders of one or two large corporations that dominate the market, the economic value is distributed across a much wider group: the early developers of Transit, the app creators who make the protocol work in a consumer-friendly form, the early-adopter drivers and passengers, the first wave of speculators. Token economies introduce a strange new set of elements that do not fit the traditional models: instead of creating value by owning something, as in the shareholder equity model, people create value by improving the underlying protocol, either by helping to maintain the ledger (as in Bitcoin mining), or by writing apps atop it, or simply by using the service. The lines between founders, investors and customers are far blurrier than in traditional corporate models; all the incentives are explicitly designed to steer away from winner-take-all outcomes. And yet at the same time, the whole system depends on an initial speculative phase in which outsiders are betting on the token to rise in value.

“You think about the ’90s internet bubble and all the great infrastructure we got out of that,” Dixon says. “You’re basically taking that effect and shrinking it down to the size of an application.”

Even decentralized cryptomovements have their key nodes. For Ethereum, one of those nodes is the Brooklyn headquarters of an organization called ConsenSys, founded by Joseph Lubin, an early Ethereum pioneer. In November, Amanda Gutterman, the 26-year-old chief marketing officer for ConsenSys, gave me a tour of the space. In our first few minutes together, she offered the obligatory cup of coffee, only to discover that the drip-coffee machine in the kitchen was bone dry. “How can we fix the internet if we can’t even make coffee?” she said with a laugh.

For a space planted in industrial Bushwick, a stone’s throw from the pizza mecca Roberta’s, “headquarters” seemed an unlikely word. The front door was festooned with graffiti and stickers; inside, the stairwells of the space appeared to have been last renovated during the Coolidge administration. Just about three years old, the ConsenSys network now includes more than 550 employees in 28 countries, and the operation has never raised a dime of venture capital. As an organization, ConsenSys does not quite fit any of the usual categories: It is technically a corporation, but it has elements that also resemble nonprofits and workers’ collectives. The shared goal of ConsenSys members is strengthening and expanding the Ethereum blockchain. They support developers creating new apps and tools for the platform, one of which is MetaMask, the software that generated my Ethereum address. But they also offer consulting-style services for companies, nonprofits or governments looking for ways to integrate Ethereum’s smart contracts into their own systems.

The true test of the blockchain will revolve — like so many of the online crises of the past few years — around the problem of identity. Today your digital identity is scattered across dozens, or even hundreds, of different sites: Amazon has your credit-card information and your purchase history; Facebook knows your friends and family; Equifax maintains your credit history. When you use any of those services, you are effectively asking for permission to borrow some of that information about yourself in order to perform a task: ordering a Christmas present for your uncle, checking Instagram to see pictures from the office party last night. But all these different fragments of your identity don’t belong to you; they belong to Facebook and Amazon and Google, who are free to sell bits of that information about you to advertisers without consulting you. You, of course, are free to delete those accounts if you choose, and if you stop checking Facebook, Zuckerberg and the Facebook shareholders will stop making money by renting out your attention to their true customers. But your Facebook or Google identity isn’t portable. If you want to join another promising social network that is maybe a little less infected with Russian bots, you can’t extract your social network from Twitter and deposit it in the new service. You have to build the network again from scratch (and persuade all your friends to do the same).

The blockchain evangelists think this entire approach is backward. You should own your digital identity — which could include everything from your date of birth to your friend networks to your purchasing history — and you should be free to lend parts of that identity out to services as you see fit. Given that identity was not baked into the original internet protocols, and given the difficulty of managing a distributed database in the days before Bitcoin, this form of “self-sovereign” identity — as the parlance has it — was a practical impossibility. Now it is an attainable goal. A number of blockchain-based services are trying to tackle this problem, including a new identity system called uPort that has been spun out of ConsenSys and another one called Blockstack that is currently based on the Bitcoin platform. (Tim Berners-Lee is leading the development of a comparable system, called Solid, that would also give users control over their own data.) These rival protocols all have slightly different frameworks, but they all share a general vision of how identity should work on a truly decentralized internet.

What would prevent a new blockchain-based identity standard from following Tim Wu’s Cycle, the same one that brought Facebook to such a dominant position? Perhaps nothing. But imagine how that sequence would play out in practice. Someone creates a new protocol to define your social network via Ethereum. It might be as simple as a list of other Ethereum addresses; in other words, Here are the public addresses of people I like and trust. That way of defining your social network might well take off and ultimately supplant the closed systems that define your network on Facebook. Perhaps someday, every single person on the planet might use that standard to map their social connections, just as every single person on the internet uses TCP/IP to share data. But even if this new form of identity became ubiquitous, it wouldn’t present the same opportunities for abuse and manipulation that you find in the closed systems that have become de facto standards. I might allow a Facebook-style service to use my social map to filter news or gossip or music for me, based on the activity of my friends, but if that service annoyed me, I’d be free to sample other alternatives without the switching costs. An open identity standard would give ordinary people the opportunity to sell their attention to the highest bidder, or choose to keep it out of the marketplace altogether.
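Here is a minimal sketch of that "list of addresses" idea (Python standing in for a hypothetical smart contract's storage; none of this is an existing Ethereum standard): each address publishes the addresses it trusts, and any application, Facebook-style or otherwise, reads the same public map.

```python
# A public, shared registry: my address -> the addresses I like and trust.
# On Ethereum this would live in a (hypothetical) contract's storage; here
# an in-memory dict stands in for it.
social_graph: dict = {}

def follow(me: str, friend: str) -> None:
    social_graph.setdefault(me, set()).add(friend)

def friends_of(address: str) -> set:
    """Any app can read the same open graph; no single company owns it."""
    return social_graph.get(address, set())

follow("0xAlice", "0xBob")
follow("0xAlice", "0xCarol")

# Two competing apps filtering feeds from the very same underlying data.
def news_app(me):  return [f"{friend}: posted an article" for friend in friends_of(me)]
def music_app(me): return [f"{friend}: shared a playlist" for friend in friends_of(me)]

print(news_app("0xAlice"))
print(music_app("0xAlice"))
```

If the news app became annoying, you could switch to a rival that reads the identical graph, which is exactly the absence of switching costs described above.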

Gutterman suggests that the same kind of system could be applied to even more critical forms of identity, like health care data. Instead of storing, say, your genome on servers belonging to a private corporation, the information would instead be stored inside a personal data archive. “There may be many corporate entities that I don’t want seeing that data, but maybe I’d like to donate that data to a medical study,” she says. “I could use my blockchain-based self-sovereign ID to [allow] one group to use it and not another. Or I could sell it over here and give it away over there.”

The token architecture would give a blockchain-based identity standard an additional edge over closed standards like Facebook’s. As many critics have observed, ordinary users on social-media platforms create almost all the content without compensation, while the companies capture all the economic value from that content through advertising sales. A token-based social network would at least give early adopters a piece of the action, rewarding them for their labors in making the new platform appealing. “If someone can really figure out a version of Facebook that lets users own a piece of the network and get paid,” Dixon says, “that could be pretty compelling.”

Would that information be more secure in a distributed blockchain than behind the elaborate firewalls of giant corporations like Google or Facebook? In this one respect, the Bitcoin story is actually instructive: It may never be stable enough to function as a currency, but it does offer convincing proof of just how secure a distributed ledger can be. “Look at the market cap of Bitcoin or Ethereum: $80 billion, $25 billion, whatever,” Dixon says. “That means if you successfully attack that system, you could walk away with more than a billion dollars. You know what a ‘bug bounty’ is? Someone says, ‘If you hack my system, I’ll give you a million dollars.’ So Bitcoin is now a nine-year-old multibillion-dollar bug bounty, and no one’s hacked it. It feels like pretty good proof.”

Additional security would come from the decentralized nature of these new identity protocols. In the identity system proposed by Blockstack, the actual information about your identity — your social connections, your purchasing history — could be stored anywhere online. The blockchain would simply provide cryptographically secure keys to unlock that information and share it with other trusted providers. A system with a centralized repository holding data for hundreds of millions of users — what security experts call a “honey pot” — is far more appealing to hackers. Which would you rather do: steal a hundred million credit histories by hacking into a hundred million separate personal computers and sniffing around until you found the right data on each machine? Or just hack into one honey pot at Equifax and walk away with the same amount of data in a matter of hours? As Gutterman puts it, “It’s the difference between robbing a house versus robbing the entire village.”
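A generic sketch of that pattern (Python, using the `cryptography` package; this illustrates the general idea, not Blockstack's or uPort's actual design): the data is encrypted with a key only you hold and can be parked on any storage provider, while the ledger carries nothing more than a pointer and an integrity hash.

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

# 1. You, and only you, hold the encryption key.
my_key = Fernet.generate_key()
ciphertext = Fernet(my_key).encrypt(b"purchase history, social connections, ...")

# 2. The encrypted blob can sit with any untrusted storage provider.
some_cloud_bucket = {"blob-123": ciphertext}

# 3. The on-chain record is just a pointer plus a hash -- no honey pot of raw data.
ledger_record = {
    "owner": "0xMyAddress",
    "location": "some_cloud_bucket/blob-123",
    "sha256": hashlib.sha256(ciphertext).hexdigest(),
}

# 4. Retrieval: fetch the blob, check it against the hash, decrypt with your key.
fetched = some_cloud_bucket["blob-123"]
assert hashlib.sha256(fetched).hexdigest() == ledger_record["sha256"]
print(Fernet(my_key).decrypt(fetched))
```

Sharing with a trusted provider then means disclosing a key (or, in a more refined scheme, re-encrypting only the fields you choose to reveal), rather than letting the provider warehouse everyone's plaintext.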

So much of the blockchain’s architecture is shaped by predictions about how that architecture might be abused once it finds a wider audience. That is part of its charm and its power. The blockchain channels the energy of speculative bubbles by allowing tokens to be shared widely among true supporters of the platform. It safeguards against any individual or small group gaining control of the entire database. Its cryptography is designed to protect against surveillance states or identity thieves. In this, the blockchain displays a familial resemblance to political constitutions: Its rules are designed with one eye on how those rules might be exploited down the line.

Much has been made of the anarcho-libertarian streak in Bitcoin and other nonfiat currencies; the community is rife with words and phrases (“self-sovereign”) that sound as if they could be slogans for some militia compound in Montana. And yet in its potential to break up large concentrations of power and explore less-proprietary models of ownership, the blockchain idea offers a tantalizing possibility for those who would like to distribute wealth more equitably and break up the cartels of the digital age.

The blockchain worldview can also sound libertarian in the sense that it proposes nonstate solutions to capitalist excesses like information monopolies. But to believe in the blockchain is not necessarily to oppose regulation, if that regulation is designed with complementary aims. Brad Burnham, for instance, suggests that regulators should insist that everyone have “a right to a private data store,” where all the various facets of their online identity would be maintained. But governments wouldn’t be required to design those identity protocols. They would be developed on the blockchain, open source. Ideologically speaking, that private data store would be a true team effort: built as an intellectual commons, funded by token speculators, supported by the regulatory state.

What is KM? Knowledge Management Explained – KMWorld Magazine

The classic one-line definition of Knowledge Management was offered up by Tom Davenport early on (Davenport, 1994): “Knowledge Management is the process of capturing, distributing, and effectively using knowledge.” Probably no better or more succinct single-line definition has appeared since.

However, Knowledge Management can best and most quickly be explained by recapping its origins. Later in this article, its stages of development will also be recapped.

The Origins of KM

The concept and the terminology of KM sprouted within the management consulting community. When the Internet arose, those organizations quickly realized that an intranet, an in-house subset of the Internet, was a wonderful tool with which to make information accessible and to share it among the geographically dispersed units of their organizations. Not surprisingly, they quickly realized that in building tools and techniques such as dashboards, expertise locators, and best practice (lessons learned) databases, they had acquired an expertise which was in effect a new product that they could market to other organizations, particularly to organizations which were large, complex, and dispersed. However, a new product needs a name, and the name that emerged was Knowledge Management. The term apparently was first used in its current context at McKinsey in 1987 for an internal study on their information handling and utilization (McInerney and Koenig, 2011). KM went public, as it were, at a conference in Boston in 1993 organized by Ernst and Young (Prusak 1999). Note that Davenport was at E&Y when he wrote the definition above.

Those consulting organizations quickly disseminated the principles and the techniques of KM to other organizations, to professional associations, and to disciplines. The timing was propitious, as the enthusiasm for intellectual capital (see below) in the 1980s had primed the pump for the recognition of information and knowledge as essential assets for any organization.

What is KM trying to accomplish?

Rich, Deep, and Open Communication

First, KM can very fruitfully be seen as the undertaking to replicate, indeed to create, the information environment known to be conducive to successful R&D—rich, deep, and open communication and information access—and to deploy it broadly across the firm. It is almost trite now to observe that we are in the post-industrial information age and that we are all information workers. Furthermore, the researcher is, after all, the quintessential information worker. Peter Drucker once commented that the product of the pharmaceutical industry wasn’t pills, it was information. The research domain, and in particular the pharmaceutical industry, has been studied in depth with a focus on identifying the organizational and cultural environmental aspects that lead to successful research (Koenig, 1990, 1992). The salient aspect that emerges with overwhelming importance is that of rich, deep, and open communications, not only within the firm, but also with the outside world. The logical conclusion, then, is to attempt to apply those same successful environmental aspects to knowledge workers at large, and that is precisely what KM attempts to do.

Situational Awareness

Second, Situational Awareness is a term only recently, beginning in 2015, used in the context of KM. The term, however, long precedes KM. It first gained some prominence in the cold war era when studies were commissioned by all of the major potential belligerents to try to identify what characteristics made a good fighter pilot. The costs of training a fighter pilot were huge, and if the appropriate characteristics leading to success could be identified, that training could be directed to the most appropriate candidates, and of those trained the most appropriate could be selected for front-line assignment. However, the only solid conclusion of those studies was that the salient characteristic of a good fighter pilot was excellent “situational awareness.” The problem was that no good predictive test for situational awareness could be developed.

The phrase then retreated into relative obscurity until it was resuscitated by Jeff Cooper, a firearms guru, and others in the context of self-defense. How do you defend and protect yourself? The first step is to be alert and to establish good situational awareness. From there the phrase entered the KM vocabulary. The role of KM is to create the capability for the organization to establish excellent situational awareness and consequently to make the right decisions.

A new definition of KM

A few years after the Davenport definition, the Gartner Group created another definition of KM, which has become the most frequently cited one (Duhon, 1998), and it is given below:

“Knowledge management is a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise’s information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers.”

The one real lacuna of this definition is that it, too, is specifically limited to an organization’s own information and knowledge assets. KM as conceived now, and this expansion arrived early on, includes relevant information assets wherever they may reside. Note, however, the breadth implied for KM by calling it a “discipline.”

Both definitions share a very organizational and corporate orientation. KM, historically at least, was primarily about managing the knowledge of and in organizations. Rather quickly, however, the concept of KM became much broader than that.

A graphic map of Knowledge Management

What is still probably the best graphic to try to set forth what constitutes KM, is the graphic developed by IBM for the use of their own KM consultants. It is based upon the distinction between collecting stuff (content) and connecting people. The presentation here includes some minor modifications, but the captivating C, E, and H mnemonics are entirely IBM’s:

Graphic Map of KM

The map is a two-by-two grid. One axis runs from COLLECTING (STUFF) & CODIFICATION to CONNECTING (PEOPLE) & PERSONALIZATION; the other runs from DIRECTED INFORMATION & KNOWLEDGE SEARCH (EXPLOIT) to SERENDIPITY & BROWSING (EXPLORE).

DIRECTED INFORMATION & KNOWLEDGE SEARCH / EXPLOIT

COLLECTING (STUFF) & CODIFICATION (HARVEST):
  • Databases, external & internal
  • Content architecture
  • Information service support (training required)
  • Data mining, best practices / lessons learned / after-action analysis

CONNECTING (PEOPLE) & PERSONALIZATION (HARNESS):
  • Community & learning
  • Directories, “yellow pages” (expertise locators)
  • Findings & facilitating tools, groupware
  • Response teams

SERENDIPITY & BROWSING / EXPLORE

COLLECTING (STUFF) & CODIFICATION (HUNTING):
  • Cultural support
  • Current awareness profiles and databases
  • Selection of items for alerting purposes / push
  • Data mining, best practices

CONNECTING (PEOPLE) & PERSONALIZATION (HYPOTHESIZE):
  • Cultural support
  • Spaces – libraries & lounges (literal & virtual), cultural support, groupware
  • Travel & meeting attendance

From: Tom Short, Senior Consultant, Knowledge Management, IBM Global Services

(Note however the comments below under “Tacit.”)


OK, what does KM actually consist of?

In short, what are the operational components of a KM system? This is perhaps the most straightforward way of explaining what KM is: delineating the operational components that people have in mind when they talk about a KM system.

(1) Content Management

So what is involved in KM? The most obvious component is making the organization’s data and information available to its members through dashboards, portals, and content management systems. Content Management, sometimes known as Enterprise Content Management, is the most immediate and obvious part of KM. For a wonderful graphic snapshot of the content management domain go to realstorygroup.com and look at their Content Technology Vendor Map. This aspect of KM might be described as Librarianship 101: putting your organization’s information and data up online, plus selected external information, and providing the capability to seamlessly shift to searching, more or less, the entire web. The term most often used for this is Enterprise Search. This is now not just a stream within the annual KMWorld Conference, but has become an overlapping conference in its own right. See the comments below under the “Third Stage of KM” section.

(2) Expertise Location

Since knowledge resides in people, often the best way to acquire the expertise that you need is to talk with an expert. Locating the right expert with the knowledge that you need, though, can be a problem, particularly if, for example, the expert is in another country. The basic function of an expertise locator system is straightforward: it is to identify and locate those persons within an organization who have expertise in a particular area. These systems are now commonly known as expertise location systems. In the early days of KM the term “Yellow Pages” was commonly used, but now that term is fast disappearing from our common vocabulary, and expertise location is, in any case, rather more precise.

There are typically three sources from which to supply data for an expertise locator system: (1) employee resumes, (2) employee self-identification of areas of expertise (typically by being requested to fill out a form online), and (3) algorithmic analysis of electronic communications from and to the employee. The latter approach is typically based on email traffic but can include other social networking communications such as Twitter, Facebook, and LinkedIn. Several commercial software packages to match queries with expertise are available. Most of them have load-balancing schemes so as not to overload any particular expert. Typically such systems rank the degree of presumed expertise and will shift a query down the expertise ranking when the higher choices appear to be overloaded. Such systems also often have a feature by which the requester can flag the request as a priority, and the system can then match high priority to high expertise rank.
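A simplified sketch of that routing logic (Python; the scores, threshold and names are invented for illustration): rank candidate experts by presumed expertise, skip anyone already overloaded, and let a priority flag reclaim the top of the ranking.

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    expertise_score: float  # e.g., derived from resumes, self-ratings, email analysis
    open_queries: int = 0   # current workload

MAX_LOAD = 3  # invented threshold

def route_query(experts: list, high_priority: bool = False) -> Expert:
    """Pick the best-ranked available expert, load-balancing unless the request is priority."""
    ranked = sorted(experts, key=lambda e: e.expertise_score, reverse=True)
    if high_priority:
        return ranked[0]                    # priority requests go straight to top expertise
    for expert in ranked:
        if expert.open_queries < MAX_LOAD:  # otherwise shift down the ranking
            return expert
    return ranked[0]                        # everyone is busy; fall back to the best

pool = [Expert("Ana", 0.95, open_queries=3), Expert("Ben", 0.80, open_queries=1)]
print(route_query(pool).name)                      # Ben (Ana is overloaded)
print(route_query(pool, high_priority=True).name)  # Ana
```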

(3) Lessons Learned

Lessons Learned databases attempt to capture and make accessible knowledge, typically “how to do it” knowledge, that has been operationally obtained and normally would not have been explicitly captured. In the KM context, the emphasis is upon capturing knowledge embedded in personal expertise and making it explicit. The lessons learned concept or practice is one that might be described as having been birthed by KM, as there is very little in the way of a direct antecedent. Early in the KM movement, the phrase most often used was “best practices,” but that phrase was soon replaced with “lessons learned.” The reasons were that “lessons learned” was a broader and more inclusive term and that “best practice” seemed too restrictive and could be interpreted as meaning there was only one best practice in a situation. What might be a best practice in North American culture, for example, might well not be a best practice in another culture. The major international consulting firms were very aware of this and led the movement to substitute the new, more appropriate term. “Lessons Learned” became the most common hallmark phrase of early KM development.

The idea of capturing expertise, particularly hard-won expertise, is not a new idea. One antecedent to KM that we have all seen portrayed was the World War II debriefing of pilots after a mission. Gathering military intelligence was the primary purpose, but a clear and recognized secondary purpose was to identify lessons learned, though they were not so named, to pass on to other pilots and instructors. Similarly, the U. S. Navy Submarine Service, after a very embarrassing and lengthy experience of torpedoes that failed to detonate on target, and an even more embarrassing failure to follow up on consistent reports by submarine captains of torpedo detonation failure, instituted a mandatory system of widely disseminated “Captain’s Patrol Reports.” The intent, of course, was to avoid any such fiasco in the future. The Captain’s Patrol Reports, however, were very clearly designed to encourage analytical reporting, with reasoned analyses of the reasons for operational failure and success. It was emphasized that a key purpose of the report was both to make recommendations about strategy for senior officers to mull over, and recommendations about tactics for other skippers and submariners to take advantage of (McInerney and Koenig, 2011).

The military has become an avid proponent of the lessons learned concept. The phrase the military uses is “After Action Reports.” The concept is very simple: make sure that what has been learned from experience is passed on, and don’t rely on the participant to make a report. There will almost always be too many things immediately demanding that person’s attention after an action. There must be a system whereby someone, typically someone in KM, is assigned the responsibility to do the debriefing, to separate the wheat from the chaff, to create the report, and then to ensure that the lessons learned are captured and disseminated. The experiences in Iraq, Afghanistan, and Syria have made this process almost automatic in the military.

The concept is by no means limited to the military. Larry Prusak (2004) maintains that in the corporate world the most common cause of KM implementation failure is that so often the project team is disbanded and the team members almost immediately reassigned elsewhere before there is any debriefing or after-action report assembled. Any organization where work is often centered on projects or teams needs to pay very close attention to this issue and set up an after-action mechanism with clearly delineated responsibility for its implementation.

A particularly instructive example of a “lesson learned” is one recounted by Mark Mazzie (2003), a well known KM consultant. The story comes from his experience in the KM department at Wyeth Pharmaceuticals. Wyeth had recently introduced a new pharmaceutical agent intended primarily for pediatric use. Wyeth expected it to be a notable success because, unlike its morning, noon, and night competitors, it needed to be administered only once a day, and that would make it much easier for the caregiver to ensure that the child followed the drug regimen, and it would be less onerous for the child. Sales of the drug commenced well but soon flagged. One sales rep (what the pharmaceutical industry used to call detail men), however, by chatting with her customers, discovered the reason for the disappointing sales and also recognized the solution. The problem was that kids objected strenuously to the taste of the drug, and caregivers were reporting to prescribing physicians that they couldn’t get their kid to continue taking the drug, so the old stand-by would be substituted. The simple solution was orange juice, a swig of which quite effectively masked the offensive taste. If the sales rep were to explain to the physician that the therapy should be conveyed to the caregiver as the pill and a glass of orange juice taken simultaneously at breakfast, then there was no dissatisfaction and sales were fine.

The obvious question that arises is: what is there to encourage the sales rep to share this knowledge? The sales rep is compensated with a salary (small) and a bonus (large). If she shares the knowledge, she jeopardizes the size of her bonus, which is based on her performance relative to her peers.

This raises the issue, discussed below, that KM is much more than content management. It extends to how one structures the organizational culture to facilitate and encourage knowledge sharing, and that in turn extends to how one structures the organization’s compensation scheme.

The implementation of a lessons learned system is complex both politically and operationally. Many of the questions surrounding such a system are difficult to answer. Are employees free to submit items to the system unvetted? Who, if anyone, is to decide what constitutes a worthwhile lesson learned? Most successful lessons learned implementations have concluded that such a system needs to be monitored and that there needs to be a vetting and approval mechanism for items that are posted as lessons learned.

How long do items stay in the system? Who decides when an item is no longer salient and timely? Most successful lessons learned systems have an active weeding or stratification process. Without a clearly designed process for weeding, the proportion of new and crisp items inevitably declines, the system begins to look stale, and usage and utility fall. Deletion, of course, is not necessarily loss and destruction. Using carefully designed stratification principles, items removed from the foreground can be archived and moved to the background but still made available. However, this procedure needs to be in place before things start to look stale, and a good taxonomically based retrieval system needs to be created.

These questions need to be carefully thought out and resolved, and the mechanisms designed and put in place, before a lessons-learned system is launched. Inattention can easily lead to failure and the creation of a bad reputation that will tar subsequent efforts.
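As a hedged illustration of the weeding and stratification just discussed (Python; the 18-month cutoff and field names are arbitrary examples, not a recommended policy), items that have not been reaffirmed recently are moved from the foreground system into a still-retrievable archive rather than deleted outright.

```python
from datetime import date, timedelta

# Each lessons-learned item records when it was last reviewed or reaffirmed.
lessons = [
    {"id": 1, "title": "Orange juice masks the taste", "last_reviewed": date(2016, 3, 1)},
    {"id": 2, "title": "Debrief before reassigning the team", "last_reviewed": date(2018, 1, 10)},
]

CUTOFF = timedelta(days=548)  # roughly 18 months; purely illustrative

def weed(items, today):
    """Split items into a fresh foreground set and an archived (not deleted) set."""
    foreground = [i for i in items if today - i["last_reviewed"] <= CUTOFF]
    archive = [i for i in items if today - i["last_reviewed"] > CUTOFF]
    return foreground, archive

fresh, archived = weed(lessons, today=date(2018, 6, 1))
print([i["id"] for i in fresh], [i["id"] for i in archived])  # [2] [1]
```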

(4) Communities of Practice (CoPs)

CoPs are groups of individuals with shared interests that come together in person or virtually to tell stories, to share and discuss problems and opportunities, discuss best practices, and talk over lessons learned (Wenger, 1998; Wenger & Snyder, 1999). Communities of practice emphasize, build upon, and take advantage of the social nature of learning within or across organizations. In small organizations, conversations around the water cooler are often taken for granted, but in larger, geographically distributed organizations, the water cooler needs to become virtual. Similarly, organizations find that when workers relinquish a dedicated company office to work online from home or on the road, the natural knowledge sharing that occurs in social spaces needs to be replicated virtually. In the context of KM, CoPs are generally understood to mean electronically linked communities. Electronic linkage is not essential, of course, but since KM arose in the consulting community from the awareness of the potential of intranets to link geographically dispersed organizations, this orientation is understandable.

A classic example of the deployment of CoPs comes from the World Bank. When James Wolfensohn became president in 1995, he focused on the World Bank’s role in disseminating knowledge about development; he was known to say that the principal product of the World Bank was not loans, but rather the creation of knowledge about how to accomplish development. Consequently, he encouraged the development of CoPs and made that a focus of his attention. One World Bank CoP, for example, was about road construction and maintenance in arid countries and conditions. That CoP was encouraged to include and seek out not only participants and employees from the World Bank and its sponsored projects and from the country where the relevant project was being implemented, but also experts from elsewhere who had expertise in building roads in arid conditions, such as, for example, staff from the Australian Road Research Board and the Arizona Department of Highways. This is also a good example of the point that despite the fact that KM developed first in a very for-profit corporate context, it is applicable far more broadly, such as in the context of government and civil society.

The organization and maintenance of CoPs is not a simple or an easy task to undertake. As Durham (2004) points out, there are several key roles to be filled. She describes the key roles as manager, moderator, and thought leader. They need not necessarily be three separate people, but in some cases they will need to be. Some questions that need to be thought about and resolved are:

  • Who fills the various roles of: manager, moderator, and thought leader?
  • How is the CoP managed, and who will fill the management role?
  • Who will have overall responsibility for coordinating and overseeing the various CoPs?
  • Who looks for new members or suggests that the CoP may have outlived its usefulness?
  • Who reviews the CoP for activity?
  • Are postings open or does someone vet or edit the postings?
  • How is the CoP kept fresh and vital?
  • When and how (under what rules) are items removed?
  • How are those items archived?
  • How are the CoP files made retrievable? How do CoP leaders coordinate with the enterprise search/taxonomy function?

Another way to view KM is to look at the stages of KM’s development

First Stage of KM: Information Technology

KM was initially driven primarily by IT, information technology, and the desire to put that new technology, the Internet, to work and see what it was capable of. That first stage has been described using a horse breeding metaphor as “by the internet out of intellectual capital,” the sire and the dam. The concept of intellectual capital, the notion that not just physical resources, capital, and manpower, but also intellectual capital (knowledge) fueled growth and development, provided the justification, the framework, and the seed. The availability of the internet provided the tool. As described above, the management consulting community jumped at the new capabilities provided by the Internet, using it first for themselves, realizing that if they shared knowledge across their organization more effectively they could avoid reinventing the wheel, underbid their competitors, and make more profit. The central point is that the first stage of KM was about how to deploy that new technology to accomplish more effective use of information and knowledge.

The first stage might be described as the “If only Texas Instruments knew what Texas Instruments knew” stage, to revisit a much quoted KM mantra. The hallmark phrase of Stage 1 was first “best practices,” later replaced by the more politic “lessons learned.”

Second Stage of KM: HR and Corporate Culture

Within a few years the second stage of KM emerged when it became apparent that simply deploying new technology was not sufficient to effectively enable information and knowledge sharing. It became obvious that human and cultural dimensions needed to be incorporated. The second stage can be described as the “‘If you build it they will come’ is a fallacy” stage. In other words, there was the recognition that “If you build it they will come” is a recipe that can easily lead to quick and embarrassing failure if human factors are not sufficiently taken into account.

It became clear that KM implementation would involve changes in the corporate culture, in many cases rather significant changes. Consider the case above of the new pediatric medicine and the discovery of the efficacy of adding orange juice to the recipe. Pharmaceutical sales reps are compensated primarily not by salary, but by bonuses based on sales results. What is in it for that sales rep to share her new discovery when the most likely result is that next year her bonus would be substantially reduced? The changes needed in corporate culture to facilitate and encourage information and knowledge sharing can be major and profound. KM therefore extends far beyond just structuring information and knowledge and making it more accessible. In particular, the organizational culture needs to be examined in terms of how it rewards information and knowledge sharing. In many cases the examination will reveal that the culture needs to be modified and enriched. Often this will involve examining and modifying how the compensation scheme rewards information and knowledge sharing.

This implies a role for KM that very few information professionals have had to play in the past: KM leaders should be involved in designing the organization’s compensation policy, a process that is politically sensitive and fraught with difficulty.

A major component of this second stage was the design of easy-to-use, user-friendly systems. The metaphor used was that the interface, the desktop interface, should be almost intuitively obvious, like the dashboard of an automobile. (This was of course before the proliferation of chips in automobiles and the advent of user manuals that were inches thick.) Human factors design became an important component of KM.

As this recognition of the importance of human factors unfolded, two major themes from the business literature were brought into the KM domain. The first was Senge’s work on the learning organization (Senge, 1990, The Fifth Discipline: The Art and Practice of the Learning Organization). The second was Nonaka’s work on “tacit” knowledge and how to discover and cultivate it (Nonaka & Takeuchi, 1995, The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation). Both were about the human factors of KM implementation and use, and both were also about knowledge creation as well as knowledge sharing and communication. The hallmark phrase of Stage 2 was “communities of practice,” or CoPs. A good indicator of the shift from the first to the second stage of KM is that at the 1998 Conference Board conference on KM there was, for the first time, a noticeable contingent of attendees from HR, human resources departments. By the next year, 1999, HR was the largest single group, displacing IT attendees from first place.

Third Stage of KM: Taxonomy and Content Management

The third stage developed from the awareness of the importance of content, and in particular of the retrievability of that content, and therefore of its arrangement, description, and syndetic structure. If a good alternative description for the second stage of KM is the “it’s no good if they don’t use it” stage, then perhaps the best description for the third stage is the “it’s no good if they try to use it but can’t find it” stage. Another bellwether is that TFPL’s report on its October 2001 CKO (Chief Knowledge Officer) Summit noted that, for the first time, taxonomies emerged as a topic, and they emerged full blown as a major one (TFPL, 2001, Knowledge Strategies – Corporate Strategies). The hallmark phrases emerging for the third stage were content management (or enterprise content management) and taxonomies. At the KMWorld 2000 Conference, a track on Content Management appeared for the first time, and by the 2001 KMWorld Conference, Content Management had become the dominant track. In 2006, KMWorld added a two-day workshop entitled Taxonomy Boot Camp, which continues today, now a day longer and expanded to international locations. The hallmark terms for the third stage of KM are taxonomy and content.

The third stage continues today and is expanding. Major themes now are “data analytics” and “machine learning” for “enterprise search.” The crux is being able to manage and retrieve your data effectively without maintaining a stable full of taxonomists. A good recent example derives from Rolls Royce’s sale of a subsidiary company. The buyer was entitled to voluminous files of company records, but how was Rolls Royce to separate those, amidst a sea of structured and unstructured data, from records containing valuable proprietary data that was not part of the sale and that Rolls Royce wanted to retain as proprietary knowledge? The answer was a major project to taxonomize, organize, index, and retrieve massive amounts of data and records. Data analytics and machine learning were powerful tools to help accomplish this, but notice the word “help.” Those tools will not replace the need for good and intelligent human guidance, training, and oversight.
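For illustration only, here is a minimal sketch of the kind of supervised text classification that can help with such a separation. It is not Rolls Royce’s actual system; the sample records, labels, and categories are invented, and in practice both the training data and the model’s calls would require exactly the human guidance and oversight described above.

```python
# A minimal sketch of machine-learning-assisted record separation.
# All documents, labels, and categories below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical sample records already reviewed and labeled by human experts.
documents = [
    "Maintenance schedules for the divested subsidiary's test rigs",
    "Subsidiary HR files and site safety audit reports",
    "Proprietary turbine blade alloy composition and fatigue test results",
    "Core engine design notes retained under license",
]
labels = ["transfer", "transfer", "retain", "retain"]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# Classify an unlabeled record; a human reviewer would still confirm the call.
print(model.predict(["Blade alloy test data for the core engine"]))
```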

The author is reminded of an occurrence some years ago at Mitre Corporation, a very information-driven organization, before the term KM was even bruited about, when the organization wanted to create an expertise location “file/database.” A senior Vice President’s first thought was that just putting the employees’ resumes online would suffice. A short demonstration convinced him otherwise. One sample search was for employees with expertise in “Defense Logistics,” a topic relevant to an RFP that Mitre was bidding on. The clincher was a resume containing neither word, but with the phrase “battlefield re-supply.”
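The anecdote illustrates the vocabulary-mismatch problem that plain keyword search cannot solve on its own. The toy sketch below, with invented resume text and a hand-built thesaurus, shows both the gap and one simple, human-curated way to bridge it; it illustrates the general point rather than describing Mitre’s actual system.

```python
# A toy illustration of why raw keyword search over resumes falls short,
# and how a small human-curated thesaurus can bridge vocabulary gaps.
# Names, resume text, and thesaurus entries are invented examples.
resumes = {
    "analyst_a": "Led battlefield re-supply planning for forward units.",
    "analyst_b": "Built cost models for commercial warehouse operations.",
}

query = "defense logistics"

# Plain keyword search: finds nobody, even though analyst_a is relevant.
keyword_hits = [name for name, text in resumes.items() if query in text.lower()]
print(keyword_hits)  # []

# A curated thesaurus maps the preferred term to known variant phrasings.
thesaurus = {"defense logistics": ["battlefield re-supply", "materiel distribution"]}

expanded = [query] + thesaurus.get(query, [])
thesaurus_hits = [name for name, text in resumes.items()
                  if any(term in text.lower() for term in expanded)]
print(thesaurus_hits)  # ['analyst_a']
```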

A good idea is to browse the KMWorld website (kmworld.com) for current reports, “white papers,” webinars, etc. on topics such as “text analytics” or “big data” or “cognitive computing.”

Yet one more definition/description

The late 20th Century, extending into the 21st Century, was characterized by an almost continuous stream of information and knowledge-related topics and enthusiasms.

Below is a list of those enthusiasms, in roughly chronological order, with the earlier at the top of the list. In some cases, where it is today not so obvious from the name, there is a brief description of what the topic or the enthusiasm consisted of.

  • Minimization of Unallocated Cost—the thesis that, for optimal decision making, costs should be accurately allocated to products. As data processing became an ever larger component of an organization’s expenses, it was usually treated as G&A (general and administrative) expense, so a smaller and smaller proportion of organizational budgets was clearly allocated to products, and decision making suffered.
  • I.T. and Productivity
  • Data Driven System Design—the thesis that the basis of good system design was not the classic if-then-else flow chart, but was based on the charting of procedures and information flow within the organization, and only then came the if-then-else detailed analysis. This drove a dramatically increased recognition of the importance of data and information in systems design and in an organization’s operations in general.
  • Decision Analysis—the addition to the basic systems analysis construct that there is often an unrecognized and unanalyzed option in any decision situation—i.e., to go back and deploy some resources to get better information, and then return and consequently have a better chance of making the optimal decision. Further, one can, in principle, compare the cost of better information with the expected value of a better decision.
  • Information Systems Stage Hypotheses–there was a profusion of stage hypotheses in the late 1970s and early 1980s: by Nolan, Rockart, Marchand, Gibson & Jackson, Koenig, and Zachman. All had ramifications for, and were for the most part publicized for, the implementation and management of I.T.
  • Managing the Archipelago (of Information Services)—the thesis that, for primarily historical reasons, the information-handling responsibilities in an organization are usually administratively scattered like the islands in an archipelago; this creates severe administrative and managerial problems, and only very senior management is in a position to get a handle on the problem, which it needs to address very consciously.
  • I.T. as Competitive Advantage
  • Management Information Systems (MIS) to Decision Support Systems (DSS)–the recognition that the disenchantment with MIS was caused primarily by its focus solely on internal information, and that most management decisions depended more on external information, and that DSSs needed to address the issue of access to external information and knowledge.
  • Enterprise-Wide Information Analysis—this was IBM’s mantra for promotion to their customers’ senior management that, to be successful, an organization 1) had to define what its enterprise really consisted of, and determine what business it was really in; 2) then it had to analyze and define what decisions it had to make correctly to be successful in that enterprise; 3) then it had to analyze what information it needed to make those decisions correctly, and obtain and process that information.
  • Information Resource Management—the concept that information was not only a resource, but was also often a product. The Paperwork Reduction Act mandated that all government agencies appoint a senior administrator in charge of Information Resource Management.
  • I.T. and Organizational Structure
  • Total Quality Management (TQM) and Benchmarking
  • Competitive Intelligence (CI)
  • I.T. and the Shift from Hierarchies to Markets–the thesis that better I.T. inevitably shifts the optimal effectiveness trade-off point toward the market end of the spectrum that runs from central planning to market economy.
  • Business Process Re-Engineering
  • Core Competencies
  • Data Warehousing and Data Mining (more recently known as Big Data)
  • E-Business
  • Intellectual Capital
  • Knowledge Management
  • Enterprise Resource Planning (ERP)—the not very obvious name for the idea of integrating all your business’s I.T. operations under one software suite.
  • Customer Relationship Management (CRM)
  • Supply Chain Management (SCM)
  • Enterprise Content Management (ECM)

The list is impressively long, and all these topics and enthusiasms are related to the management of information and knowledge, or to the management of information processing functions. It would be very hard to assemble anywhere near as long a list of management topics and enthusiasms from the same era that were not related to the management of information and knowledge or of information processing functions.

If the list is so long, and they have so major a theme in common, has there not been some recognition that all these trees constitute a forest? Well, there was (Koenig, 2000), and it was called “information driven management,” the name put forward for the “forest” at the time, but it received comparatively little exposure or momentum.

One interesting way to look at KM is that it has expanded to become, and is now, the recognition of that forest of trees (McInerney and Koenig, 2011), and that KM is simply a much better and more recognized name for it than “information driven management.” It is also interesting that this stream of trees, to mix metaphors, has dwindled dramatically since the appearance of KM as an important topic. It can further be argued that the typical new topic or enthusiasm, the cloud and big data for example, can be seen as emerging from within KM.

Other KM Issues

Tacit Knowledge

The KM community uses the term “tacit knowledge” to mean what is not “explicit knowledge,” and in that usage what is usually meant by “tacit” is implicit knowledge, that which is not explicit or formally captured in some fashion, most obviously the knowledge in people’s heads. A more useful and nuanced categorization is explicit, implicit, and tacit. There is indeed tacit knowledge which only resides in someone’s head. Nonaka uses the story of the tacit knowledge that was necessary to develop a home bread maker. To understand what was needed to design a machine to knead dough properly, it was necessary for the engineers to work with bread makers to get the feel for how the dough needed to be manipulated.

But frankly, the extent of truly tacit knowledge, like how to get up on water skis, that overlaps with the interests of KM systems is rather small. What is often very extensive is the amount of implicit information that could have been made explicit but has not been. That it has not been is usually not a failure, but simply a cost-effective decision, usually taken unconsciously, that it is not worth the effort. The danger lies in assuming that explicit information is addressed by “collecting” and tacit information by “connecting,” and not examining whether there is potentially important implicit information that could and should be made explicit. The after-action comments above under Lessons Learned illustrate this important point.

Knowledge Retention and Retirees

One long standing KM issue is the need to retain the knowledge of retirees. The fact that the baby boomer bulge is now reaching retirement age is making this issue increasingly important. KM techniques are very relevant to this issue. The most obvious technique is the application of the lessons learned idea—just treat the retiree’s career as a long project that is coming to its end and create an after action report, a massive data dump. This idea seems straightforward enough, and debriefing the retiree and those with whom he works closely about what issues they perceive as likely to surface or that could possibly arise is obvious common sense. But only in special cases is the full data dump approach likely to be very useful. When a current employee has a knowledge need, is he or she likely to look for the information, and if so how likely is it that the employee’s search request will map onto the information in the retiree’s data dump?

Much more likely to be useful is to keep the retiree involved: maintaining him or her in the CoPs, involved in the discussions concerning current issues, and findable through expertise locator systems. The real utility is likely to be found not directly in the information that the retiree leaves behind, but in new knowledge created by the interaction of the retiree with current employees. The retiree, in response to a current issue, says “it occurs to me that …” and elicits a response something like “yes, the situation is somewhat similar, but here …”; a discussion unfolds, the retiree contributes some of the needed expertise, and a solution is generated. The solution arises partially from the retiree’s knowledge, but more from the interaction.

The Scope of KM

Another major development is the expansion of KM beyond the 20th-century vision of KM as the organization’s own knowledge, as described in the Gartner Group definition of KM. Increasingly, KM is seen as ideally encompassing the whole bandwidth of information and knowledge likely to be useful to an organization, including knowledge external to the organization—knowledge emanating from vendors, suppliers, customers, etc., and knowledge originating in the scientific and scholarly community, the traditional domain of the library world. Looked at in this light, KM extends into environmental scanning and competitive intelligence.

The additional description of KM above, under “Yet one more definition/description,” the forest of trees, also makes the case that the scope of KM is now very broad.

Is KM here to stay?

The answer certainly appears to be yes. The most compelling analysis is the bibliometric one: simply count the number of articles in the business literature on a topic and compare that count, year by year, to the counts for other business enthusiasms. Most business enthusiasms grow rapidly and reach a peak after about five years, and then decline almost as rapidly as they grew.
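The underlying measurement is simple enough to sketch in a few lines. The records below are invented placeholders; the published analyses cited with the charts were run against business-literature databases.

```python
# A minimal sketch of the bibliometric test: count, per year, how many
# article titles mention the topic. The records are invented placeholders.
from collections import Counter

records = [
    (1998, "Knowledge Management and the Learning Organization"),
    (1999, "Reengineering the Supply Chain"),
    (1999, "Measuring Knowledge Management Maturity"),
    (2002, "KM After the Hype"),
]

def mentions_km(title: str) -> bool:
    """True if the title contains 'knowledge management' or the word 'KM'."""
    lowered = title.lower()
    return "knowledge management" in lowered or "km" in lowered.split()

counts = Counter(year for year, title in records if mentions_km(title))
for year in sorted(counts):
    print(year, counts[year])
```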

Below are the graphs for three hot management topics (or fads) of recent years:

Quality Circles, 1977-1986 (Source: Abrahamson, 1996)

Total Quality Management, 1990-2001 (Source: Ponzi & Koenig, 2002)

Business Process Reengineering, 1990-2001 (Source: Ponzi & Koenig, 2002)

KM looks dramatically different:

This graph charts the number of articles in the business literature with the phrase “Knowledge Management” in the title.

If we chart the number of articles in the business literature with the phrase “Knowledge Management” or the abbreviation “KM” in the title, we get the chart below, with an order of magnitude more literature:

KM Growth 2001-2011

It does indeed look as though KM is no mere enthusiasm; KM is here to stay.

References

Abrahamson, E. & Fairchild, G. (1999). Management fashion: lifecycles, triggers, and collective learning processes. Administrative Science Quarterly, 44, 708-740.

Davenport, Thomas H. (1994). Saving IT’s Soul: Human Centered Information Management. Harvard Business Review, March-April, 72(2), 119-131.

Duhon, Bryant (1998). It’s All in Our Heads. Inform, September, 12(8).

Durham, Mary. (2004). Three Critical Roles for Knowledge Management Workspaces. In M.E.D. Koenig & T.K. Srikantaiah (Eds.), Knowledge Management: Lessons Learned: What Works and What Doesn’t (pp. 23-36). Medford, NJ: Information Today, for The American Society for Information Science and Technology.

Koenig, M.E.D. (1990) Information Services and Downstream Productivity. In Martha E. Williams (Ed.), Annual Review of Information Science and Technology: Volume 25, (pp. 55-56). New York, NY: Elsevier Science Publishers for the American Society for Information Science.

Koenig, M.E.D. (1992). The Information Environment and the Productivity of Research. In H. Collier (Ed.), Recent Advances in Chemical Information (pp. 133-143). London: Royal Society of Chemistry.

Mazzie, Mark. (2003). Personal communication.

Koenig, M.E.D. (2000). The Evolution of Knowledge Management. In T.K. Srikantaiah & M.E.D. Koenig (Eds.), Knowledge Management for the Information Professional (pp. 23-26). Medford, NJ: Information Today, for the American Society for Information Science.

McInerney, Claire, and Koenig, Michael E. D., (2011), Knowledge Management (KM) Processes in Organizations: Theoretical Foundations and Practice, Morgan and Claypool.

Nonaka, I. & Takeuchi, H. (1995). The knowledge creating company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford University Press.

Ponzi, Leonard, & Koenig, M.E.D. (2002). Knowledge Management: Another Management Fad? Information Research, 8(1). Retrieved from http://informationr.net/ir/8-1/paper145.html

Prusak, Larry. (1999). Where Did Knowledge Management Come From? Knowledge Directions, 1(1), 90-96.

Prusak, Larry. (2004). Personal communication.

Senge, Peter M. (1990). The Fifth Discipline: The Art & Practice of the Learning Organization. New York, NY: Doubleday Currency.

Wenger, Etienne C. (1998). Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University Press.

Wenger, Etienne C. & Snyder, W. M. (1999). Communities of practice: The organizational frontier. Harvard Business Review, 78(1), 139-145.

About the Author

Michael E.D. Koenig, Ph.D., is the author or co-author of a number of books on KM, including Knowledge Management in Practice (www.infotoday.com), and of numerous articles on the subject. He is Professor Emeritus at Long Island University and the former and founding dean of its College of Information and Computer Science. In 2015 he received the Award of Merit from the Association for Information Science and Technology, the association’s highest award.

A “Win” for Health Information Exchange? How the Patient-Centered Data Home Could Shift the Narrative – Healthcare Informatics

“Living well” would be an accurate way to describe the experiences of tourists who visit San Diego, often for its miles of white-sand beaches and amazing weather. But behind the scenes, too, city healthcare leaders have been working hard on their own version of living well.

Indeed, a strategic vision known as “Live Well San Diego”—the city’s 10-year plan to focus on building better health, living safely and thriving—has provided a foundation for how healthcare in San Diego should be imagined. Essentially, the strategy aligns the efforts of individuals, organizations and government to help all 3.3 million San Diego County residents live well, the region’s health officials say.

As Nick Yphantides, M.D., the chief medical officer for San Diego County’s medical care services division, puts it in a recent sit-down interview with Healthcare Informatics, “It’s not just about healthcare delivery, but it’s about the context and environment in which that delivery occurs.” Expanding on that, Yphantides notes that the key components for Live Well San Diego are indeed health, safety, and thriving, and within these larger buckets are critical care considerations such as: economic development, vitality, social economic factors, social determinants of health, preparedness and security, and finally, being proactive in one’s care.

So far, through the Live Well San Diego initiative, the city has created more than 8,000 healthcare jobs over a five-year span and more than 1.2 million square feet of additional hospital space, according to a 2017 report on Southern California’s growing healthcare industry.

From here, the attention has turned to improving the data sharing infrastructure in the city, a significant patient care challenge that is not unique to San Diego, but nonetheless critical to the evolution of any healthcare market that is progressing toward a value-based care future. To this end, toward the end of 2016, ConnectWellSD, a county-wide effort to put Live Well San Diego into action, was launched with the aim of improving access to county health services, serving as a “one-stop shop” for customer navigation. Officials note that while still in the early stages of development, ConnectWellSD will implement new technologies that will allow users to perform functions such as looking up a customer file, making and managing referrals, or sharing case notes.

Carrie Hoff, ConnectWellSD’s deputy director, says the impetus behind launching the web portal was the need to pull disparate data together to have a fuller view of how the individual is being served, in compliance with privacy and confidentiality requirements. “Rounding up that picture sets ourselves up to collaborate across disciplines in a more streamlined way,” Hoff says.

Moving forward, with the ultimate goal of “whole-person centricity” in mind, San Diego health officials envision a future in which ConnectWellSD, along with San Diego Health Connect (SDHC)—the metro area’s regional health information exchange (HIE)—and the area’s “2-1-1 agency,” which houses San Diego’s local community information exchange (CIE), all work in cohesion to create a longitudinal record that promotes a proactive, holistic, person-centered system of care.

Yet as it stands today, “From a data ecosystem perspective, San Diego is still a work in progress,” Yphantides acknowledges. “But we’re looking to really be a data-driven, quantified, and outcome-based environment,” he says.

To this end, SDHC is an information-sharing network that’s widely considered one of the most advanced in the country. Once federally funded, SDHC is now sustained by its hospital and other patient care organization participants, and according to a recent newsletter, the HIE has contracted with 19 hospitals, 17 FQHCs (federally qualified health centers), three health plans and two public health agencies in total.

The regional HIE proved its value during last year’s tragic hepatitis A outbreak among San Diego County’s homeless population, which resulted in 592 public health cases and 20 deaths over a period of a little less than one year. In an interview with Healthcare Informatics Editor-in-Chief Mark Hagland late last year, Dan Chavez, executive director of SDHC, noted that the broad reach of his HIE turned out to be quite helpful during this public health crisis.

Drilling down, per Hagland’s report, “Chavez is able to boast that 99 percent of the patients living in San Diego and next-door Imperial Counties have their patient records entered into San Diego Health Connect’s core data repository, which is facilitating 20 million messages a month, encompassing everything from ADT alerts to full C-CDA (consolidated clinical documentation architecture) transfer.”

According to Chavez, “With regard to hep A, we’ve done a wonderful job with public health reporting. I venture to say that in every one of those cases, that information was passed back and forth through the HIE, all automated, with no human intervention. As soon as we had any information through a diagnosis, we registered the case with public health, with no human intervention whatsoever. And people have no idea how important the HIE is, in that. What would that outbreak be, without HIE?”
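To make the kind of automation Chavez describes concrete, here is a minimal, hypothetical sketch of a rule that flags an incoming hepatitis A diagnosis and queues an automated case report for public health. The field names, code value, and report format are illustrative assumptions, not SDHC’s actual interfaces.

```python
# A hypothetical sketch of automated public health case reporting.
# Message fields and the report format are invented for illustration.
HEP_A_CODES = {"B15.9"}  # example ICD-10 code for hepatitis A

def route_message(message: dict, public_health_queue: list) -> None:
    """Queue an automated case report if the message carries a hep A code."""
    if message.get("diagnosis_code") in HEP_A_CODES:
        public_health_queue.append({
            "patient_id": message["patient_id"],
            "condition": "hepatitis A",
            "reported_by": "HIE-automated",
        })

queue = []
route_message({"patient_id": "12345", "diagnosis_code": "B15.9"}, queue)
print(queue)  # one case report generated with no human intervention
```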

To this point, Yphantides adds that to him, the hepatitis A crisis was actually not as much about an infectious outbreak as much as it was “inadequate access, the hygienic environment, and not having a roof over your head.” Chavez would certainly agree with Yphantides, as he noted in Hagland’s 2017 article, “We’re going through a hepatitis A outbreak, and we’re coming together to solve that. We have the fourth-largest homeless population in the U.S.—about 10,000 people—and this [crisis] is largely a result of that. We’re working hard on homelessness, and this involves the entire community.”

Indeed, while administering tens of thousands of hepatitis A vaccines—which are 90 percent effective at preventing infection—turned out to be a crucial factor in stopping the outbreak, there were plenty of other steps taken by public health officials related to the challenges described above. Per a February report in the San Diego Union-Tribune, some of these actions included “installing hand-washing stations and portable toilets in locations where the homeless congregate and regularly washing city sidewalks with a bleach solution to help make conditions more sanitary for those living on the streets.” What’s more, Family Health Centers of San Diego employees “often accompanied other workers out into the field and even used gift cards, at one point, to persuade people to get vaccinated,” according to the Union-Tribune report.

Yphantides notes that the crisis required coordinated efforts between the state, the city, and various other municipalities, crediting San Diego County for its innovative outreach efforts which he calls the “Uberization of public health,” where instead of expecting people to come to healthcare facilities, “we would come to them.” He adds that “hep A is so easily transmissible, and it would have been convenient to say that it’s a homeless issue, but based on how easily it is transmitted, it could have become a broader general population factor for us.”

Other Regional Considerations

Jennifer Tuteur, M.D., the county’s deputy chief medical officer, medical care services division, attributes homelessness in San Diego to an array of factors, some unique to the region and others not: the warm year-round weather; the many different people who live in vastly different settings, ranging from tents to canyons to beaches and elsewhere; and the urbanization of downtown and the building of new stadiums. But beyond homelessness, there are plenty of other market challenges to which healthcare leaders must find innovative solutions.

For instance, says Yphantides, relative to some parts of the U.S., California has made great strides in expanding insurance coverage due to the Affordable Care Act—which lowered the state’s uninsured rate to between 5 and 7 percent—but there are still core challenges in regard to access. “We’re still dealing with a fragmented system; like many parts of the U.S., we are siloed and not an optimally coordinated system, especially when it comes to ongoing challenges related to behavioral health,” he says, specifically noting issues around data sharing, the disparity of platforms, a lack of clarity from a policy perspective, and guidance on patient consent.

To this end, San Diego County leaders are looking to bridge the gaps between those silos, and also between the healthcare delivery system and the broader ecosystem, having realized how important that ecosystem is, Yphantides adds. “But what does that look like in terms of integrating the social determinants of health? Who will be financing it, and who will be responsible for it? You have a tremendous number of payers who all have a slice of the pie,” he says.

Speaking more to the behavioral health challenges in the region, Yphantides says there are “real issues related to both psychiatric and substance abuse.” And perhaps somewhat unique to California, due to the cost of living, “we have tremendous challenges in relation to the workforce. So being able to find adequate behavioral health specialists at all levels—not just psychiatrists—is a big issue.”

What’s more, while Yphantides acknowledges that every state probably has a similar gripe, when looking at state reimbursement rates for Medi-Cal, the state’s Medicaid program, California ranks somewhere between 48th and 50th in terms of compensation for Medicaid care. Taken all together, given the challenges related to Medicaid compensation, policy, data sharing, workforce and cost of living, “it all adds up with access challenges that are less than ideal,” he attests.

In the end, those interviewed for this story all attest that one of the unique regional characteristics that separates San Diego from many other regions is the constant desire to collaborate, both at an individual level and an inter-organization level. Tuteur offers that San Diego residents will often change jobs or positions, but are not very likely to leave the city outright. “That means that a lot of us have worked together, and as new people come in, that’s another thing that builds our collaboration. I may have worn [a few different] hats, but that commitment to serving the community no matter what hat we wear couldn’t be stated enough in San Diego.”

And that level of collaboration extends to the patient care organization level as well, with initiatives such as Accountable Communities for Health and Be There San Diego serving as examples of how providers on the ground—despite sometimes being in fierce competition with one another—are working to better the health of their community. “Coopetition—a hybrid of cooperation and competition—describes our environment eloquently,” says Yphantides.


7 Ed Tech Trends to Watch in 2018 – Campus Technology

Virtual Roundtable

7 Ed Tech Trends to Watch in 2018

What education technologies and trends will have the most impact in the coming year? We asked four higher ed IT leaders for their take.

Whenever we analyze the landscape of higher education technology, we find a range of trends in various stages of development. There are topics with real staying power, such as learning space design (which has factored into our trends list for several years). Others have evolved over time: Virtual reality made our list in 2016, then expanded to include augmented and mixed reality in 2017, and this year makes up part of a broader concept of immersive learning. And while some topics, like video, have been around for ages, new developments are putting them into a different light.

To help make sense of it all, we asked a panel of four IT leaders from institutions across the country for their thoughts. Here’s what they told us.

Our Panelists

Brian Fodrey

Assistant Dean for Facilities & Information Technology and Chief Information Officer, School of Government, The University of North Carolina at Chapel Hill

David Goodrum

Director of Academic Technology, Information Services, Oregon State University

Thomas Hoover

Chief Information Officer and Dean of the Library, University of Louisiana Monroe

Anu Vedantham

Director of Learning and Teaching Services for FAS Libraries, Harvard Library, Harvard University

1) Data-Driven Institutions

Brian Fodrey: In the age of big data, with leaders focused on making data-driven decisions, having a data and information management strategy in place in IT is no longer just a luxury, but quickly becoming a necessity.

A unified data standardization effort can make all systems and processes better and can be directly managed by assessing how data is collected, cleansed and ultimately stored. Employing a data-in, information-out mindset forces us to be strategic about why data is being requested, how it is solicited and the manner in which it will inform future offerings, services and/or systems enterprise-wide. Additionally, having reliable data sets lessens the need for redundant collection points at various application levels, and instead creates a more uniform and positive user experience.

Beyond the capturing and management of data, understanding and recognizing the diversity in where and how all constituents at an institution are consuming various data sets can also lead to learning more about those who value our information, utilize our services and influence how we collect data in the future.
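As a purely hypothetical illustration of that data-in, information-out mindset, the sketch below validates and normalizes an incoming record against one shared definition at the point of collection, so downstream systems can reuse the same cleansed data rather than re-collecting it. The field names and rules are invented, not any particular campus schema.

```python
# A hypothetical sketch of standardizing data at the point of collection.
# Field names and validation rules are invented for illustration.
import re

SHARED_SCHEMA = {
    "student_id": re.compile(r"^\d{8}$"),       # e.g., an eight-digit ID
    "email": re.compile(r"^[^@\s]+@[^@\s]+$"),  # rough email shape check
}

def standardize(record: dict) -> dict:
    """Trim whitespace, lowercase the email, and enforce the shared schema."""
    cleaned = {
        "student_id": record.get("student_id", "").strip(),
        "email": record.get("email", "").strip().lower(),
    }
    for field, pattern in SHARED_SCHEMA.items():
        if not pattern.match(cleaned[field]):
            raise ValueError(f"{field} fails the shared standard: {cleaned[field]!r}")
    return cleaned

print(standardize({"student_id": " 12345678 ", "email": "Student@Example.edu "}))
```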

Thomas Hoover: Data and big data have been buzzwords — rightfully so — for the last several years. Universities are making great progress when it comes to using data to help with retention and student success. However, there is still much room for improvement to take advantage of data-driven decision-making across the entire campus.

For instance, data can be used to determine if classrooms are being utilized optimally before new construction projects are kicked off. It can and should be used to determine if aging computer labs should be renewed or transformed into something that is more useful to the university. Efforts like these can not only streamline campus operations, but also ensure that we are making the most of the resources we have in the service of teaching and learning.
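As a toy illustration of that kind of utilization check (with invented room names and hours, not any institution’s real schedule data), the snippet below compares scheduled class hours to available hours per room and flags underused space.

```python
# A toy classroom-utilization check with invented data.
WEEKLY_AVAILABLE_HOURS = 60  # e.g., 8 a.m. to 8 p.m., Monday through Friday

scheduled_hours = {"Hall 101": 42, "Hall 102": 18, "Lab 3": 9}

for room, hours in scheduled_hours.items():
    utilization = hours / WEEKLY_AVAILABLE_HOURS
    flag = "candidate for repurposing" if utilization < 0.5 else "well used"
    print(f"{room}: {utilization:.0%} utilized ({flag})")
```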

Another underused resource is GIS data. Historically, GIS data has primarily been used in the hard sciences — but that same data could be analyzed in practically any class on a college campus. Think history, political science, criminal justice, urban planning — there is so much data out there, and we can all do a better job of using it.

Volume 65 Issue 9

Report of the Ad Hoc Consultative Committee for the Selection of a Dean of the School of Dental Medicine

    The Ad Hoc Consultative Committee for the Selection of a Dean of the School of Dental Medicine (SDM) was convened by President Amy Gutmann on September 26, 2017. During its four months of work, the full Committee met on nine occasions and reported its recommendations to the President and the Provost on February 1, 2018. The Committee members were:

    Chair: Antonia Villarruel, Professor and Margaret Bond Simon Dean of Nursing

    Faculty: Hydar Ali, Professor of Pathology and Director of Faculty Advancement and Diversity, SDM

    Faizan Alawi, Associate Professor of Pathology; Director, Penn Oral Pathology Services; and Associate Dean for Academic Affairs, SDM

    Kathleen Boesze-Battaglia, Professor of Biochemistry, SDM

    Eve Higginbotham, Professor of Ophthalmology and Vice Dean for Inclusion and Diversity, PSOM

    Kelly Jordan-Sciutto, Chair and Professor of Pathology (SDM) and Associate Dean for Graduate Education and Director of Biomedical Graduate Studies (PSOM)

    Bekir Karabucak, Chair and Associate Professor of Endodontics, SDM

    Eric Stoopler, Associate Professor of Oral Medicine and Director, Oral Medicine Residency Program, SDM

    Students: Sehe Han, D’18

    Bret Lesavoy, D’19

    Alumni: William Cheung, Chair of the Board of Overseers

    Martin Levin, Member of the Board of Overseers

    Ex Officio: Joann Mitchell, Senior Vice President for Institutional Affairs and Chief Diversity Officer

    The search was supported by Adam P. Michaels, Deputy Chief of Staff in the President’s Office, and Dr. Warren Ross of the executive search firm Korn Ferry.

    The Committee and its consultants conducted informational interviews and consultative meetings with individuals and groups throughout the Penn and Penn Dental Medicine communities, as well as many informal contacts, in order to better understand the scope, expectations and challenges of the Dean’s position and the opportunities facing the University in the years ahead. These consultative activities included full Committee meetings with Dean Denis Kinane and Interim Dean Designate Dana Graves and members of the Penn Dental Medicine leadership team, including the associate deans. In addition, the Chair and the Committee members held open meetings for various Penn Dental Medicine constituencies. The consultants interviewed administrators from the central administration and from Penn Dental Medicine and sought nominations from academics and practitioners across the nation and the world as well as from leaders in government, foundations, academic societies and other organizations. Finally, members of the Committee engaged in extensive networking with Penn faculty and students, as well as colleagues at other institutions. The Committee also solicited advice and nominations from Penn Dental Medicine faculty, staff and students as well as Penn Deans and faculty and staff from across the campus via email and reviewed a variety of documents about the school.

    Based upon these conversations and materials, the Committee’s charge from the President and the Provost, and the Committee’s own discussions, a comprehensive document was prepared outlining the scope of the position and the challenges a new Dean will face, as well as the qualities sought in a new Dean. The vacancy was announced (and input invited from the entire Penn community) in Almanac.

    Over the course of its four-month search process, the Committee and its consultants contacted and considered more than 230 individuals for the position. From this group, the committee evaluated an initial pool of 43 nominees and applicants and ultimately selected 10 individuals for semi-finalist interviews with the entire Committee. Based on voluntary self-identifications and other sources, we believe the initial pool of 43 contained eight women and 35 men, and five people of color. The five individuals recommended for consideration to the President included two women and were selected from this group of 10 semi-finalists.

    On March 29, 2018, President Gutmann and Provost Pritchett announced the selection of Dr. Mark Wolff as the Morton Amsterdam Dean of Penn Dental Medicine. Dr. Wolff is a celebrated teacher, globally engaged scholar and deeply experienced clinician who served as professor and chair of cariology and comprehensive care in the College of Dentistry at New York University. He assumed his office on July 1, 2018, after ratification by the Trustees at their June meeting.

    —Antonia M. Villarruel, Professor and Margaret Bond Simon Dean of Nursing; Chair, Consultative Committee on the Selection of a Dean of the School of Dental Medicine

    Nominations for University-Wide Teaching Awards: December 7

    Nominations for Penn’s University-wide teaching awards are now being accepted by the Office of the Provost. Any member of the University community—past or present—may nominate a teacher for these awards. There are three awards:

    The Lindback Award for Distinguished Teaching honors eight members of the standing faculty—four in the non-health schools (Annenberg, Design, SEAS, GSE, Law, SAS, SP2, Wharton) and four in the health schools (Dental Medicine, PSOM, Nursing, Veterinary Medicine).

    The Provost’s Award for Distinguished PhD Teaching and Mentoring honors two faculty members for their teaching and mentoring of PhD students. Standing and associated faculty in any school offering the PhD are eligible for the award.

    The Provost’s Award for Teaching Excellence by Non-Standing Faculty honors two members of the associated faculty or academic support staff who teach at Penn, one in the non-health schools and one in the health schools.

    Nomination forms are available at the Teaching Awards website, https://provost.upenn.edu/education/teaching-at-penn/teaching-awards. The deadline for nominations is Friday, December 7, 2018. Full nominations with complete dossiers prepared by the nominees’ department chairs are due Friday, February 1, 2019.

    Note: For the Lindback and Non-Standing Faculty awards, the health schools—Dental Medicine, Nursing, PSOM and Veterinary Medicine—have a separate nomination and selection process. Contact the relevant Dean’s Office to nominate a faculty member from one of those schools.

    There will be a reception honoring all the award winners in the spring. For information, please email provost-ed@upenn.edu or call (215) 898-7225.

    Criteria and Guidelines

    1. The Lindback and Provost’s Awards are given in recognition of distinguished teaching. “Distinguished teaching” is teaching that is intellectually demanding, unusually coherent and permanent in its effect. The distinguished teacher has the capability of changing the way in which students view the subject they are studying. The distinguished teacher provides the basis for students to look with critical and informed perception at the fundamentals of a discipline, and s/he relates that discipline to other disciplines and to the worldview of the student. The distinguished teacher is accessible to students and open to new ideas, but also expresses his/her own views with articulate and informed understanding of an academic field. The distinguished teacher is fair, free from prejudice and single-minded in the pursuit of truth.

    2. Skillful direction of dissertation students, effective supervision of student researchers, ability to organize a large course of many sections, skill in leading seminars, special talent with large classes, ability to handle discussions or structure lectures—these are all attributes of distinguished teaching, although it is unlikely that anyone will excel in all of them. At the same time, distinguished teaching means different things in different fields. While the distinguished teacher should be versatile, as much at home in large groups as in small, in beginning classes as in advanced, s/he may have skills of special importance in his/her area of specialization. The primary criteria for the Provost’s Award for Distinguished PhD Teaching and Mentoring are a record of successful doctoral student mentoring and placement, success in collaborating on doctoral committees and graduate groups and distinguished research.

    3. Since distinguished teaching is recognized and recorded in different ways, evaluation must also take several forms. It is not enough to look solely at letters of recommendation from students or to consider “objective” evaluations of particular classes in tabulated form. A faculty member’s influence extends beyond the classroom and individual classes. Nor is it enough to look only at a candidate’s most recent semester or opinions expressed immediately after a course is over; the influence of the best teachers lasts, while that of others may be great at first but lessen over time. It is not enough merely to gauge student adulation, for its basis is superficial; but neither should such feelings be discounted as unworthy of investigation. Rather, all of these factors and more should enter into the identification and assessment of distinguished teaching.

    4. The Lindback and Provost’s Awards have a symbolic importance that transcends the recognition of individual merit. They should be used to advance effective teaching by serving as reminders to the University community of the expectations for the quality of its mission.

    5. Distinguished teaching occurs in all parts of the University. Therefore, faculty members from all schools are eligible for consideration. An excellent teacher who does not receive an award in a given year may be re-nominated in some future year and receive the award then.

    6. The Lindback and Provost’s Awards may recognize faculty members with many years of distinguished service or many years of service remaining. The teaching activities for which the awards are granted must be components of the degree programs of the University of Pennsylvania.


    Deaths

    Bernard Carroll, Psychiatry

    Bernard J. Carroll, former professor of psychiatry in Penn’s Perelman School of Medicine, died September 10 at his home in Carmel, California, from lung cancer. He was 77.

    Dr. Carroll was born in Australia and graduated from the University of Melbourne in 1964 with degrees in psychiatry and medicine. When he was 28 years old, he developed a test called the dexamethasone suppression test, or DST, grounded in biology rather than Freudian theory. However, the test never achieved widespread use because, around that same time, the way types of depression were classified changed and modern antidepressants hit the market, changing how studies were interpreted and shared and what new knowledge was pursued.

    A few years later, in 1971, he came to Penn as a clinical research fellow in the department of psychiatry, and he served as an assistant professor of psychiatry from 1972 to 1973. He went on to positions at the University of Michigan and Duke, where he earned emeritus status. He also served as clinical director of a geriatric hospital outside Durham, North Carolina.

    Dr. Carroll is survived by his wife, Sylvia.

    Jay Kislak, Kislak Center

    Jay I. Kislak (W’43), real estate magnate and long-time supporter of the University of Pennsylvania, died October 3 at his home in Miami, Florida. He was 96.

    Passionate about rare books, manuscripts and historical artifacts, Mr. Kislak donated $5.5 million to Penn (Almanac September 17, 2013), a gift that was key to renovating Van Pelt-Dietrich Library’s 5th and 6th floors and created the sleek, modern Kislak Center for Special Collections, Rare Books and Manuscripts, which debuted in 2012 (Almanac April 16, 2013). It was the largest cash contribution from an individual donor in the Libraries’ history to date.

    Mr. Kislak, a native of Hoboken, New Jersey, got his first real estate license in high school. After earning an economics degree from Wharton, he served as a US Navy pilot in World War II. In the 1950s, he moved to Florida and expanded his family’s business into a privately held real estate and financial services empire.

    Mr. Kislak’s passion for rare books, manuscripts and historical artifacts began early. Starting first with books, he began to focus his collecting interests on Florida and the Americas, later turning to art and artifacts. Collaborating with his wife, Jean, he assembled widely diverse collections encompassing many interest areas. In 2004, more than 3,000 books and other objects from their collection became a gift to the nation, now known as the Jay I. Kislak Collection at the Library of Congress in Washington, DC. He also made notable donations to create Kislak Centers at the University of Miami and Miami Dade College’s Freedom Tower.

    He is survived by his wife, Jean; children, Jonathan, Philip (C’70) and Paula; stepdaughter Jennifer Rettig; grandchildren, Rebecca, Jason, Tamara, Libby (W’10) and Jane; great-grandchildren Ezra, Simon, Kayla, Julia, Stokes and Aura; and his brother, David.

    Governance

    From the Senate Office: Faculty Senate Executive Committee Actions

    The following is published in accordance with the Faculty Senate Rules. Among other purposes, the publication of SEC actions is intended to stimulate discussion among the constituencies and their representatives. Please communicate your comments to Patrick Walsh, executive assistant to the Senate Office, either by telephone at (215) 898-6943 or by email at senate@pobox.upenn.edu.

    Faculty Senate Executive Committee Actions
    Wednesday, October 10, 2018

    2018 Senate Nominating Committee. Pursuant to the Faculty Senate Rules, the members of SEC were requested to submit the name of a member of the Standing Faculty to appear on the Nominating Committee ballot.

    Update from the Office of the Provost. Provost Wendell Pritchett offered an update on a number of topics. The Provost’s Office and the Online Learning Initiative are co-hosting a summit on campus October 12, 2018, with the University of the Future Network to discuss how globalization, online learning and other changes are transforming the university of the future. The Take Your Professor to Lunch program continues during 2018-2019, and a notice will be sent to students in the coming weeks. Benoit Dubé is focusing his initial efforts as Chief Wellness Officer on student wellness initiatives; Provost Pritchett thanked the Faculty Senate for its role in recommending the establishment of the Chief Wellness Officer position. The Penn First Plus program has been created to support first-generation and high-need students; two faculty co-directors, Camille Charles and Robert Ghrist, have been appointed to lead the effort. Several faculty development initiatives are in place to try to further the development of faculty at all levels, including the Penn Faculty Fellows program and other efforts to support both junior faculty who are working toward promotion and tenure and faculty who are in management or leadership roles. The Office of the Vice Provost for Faculty helped host the conference “Changing the National Conversation: Inclusion and Equity” held in September, which included participation by presidents and provosts from more than 100 universities; a report will be issued later this year. A conversation between the Provost and SEC members ensued.

    Human Capital Management Project Update. Vice Provost for Faculty Anita Allen and Associate Provost for Finance and Planning Mark Dingfield described progress on replacing Penn’s existing payroll and faculty management systems with new cloud-based products Workday and Interfolio, respectively. The cloud-based systems will replace current mainframe systems, which will make them more secure. Almost every constituency in the University will be impacted, and the new products will reduce inefficiencies and streamline processes for faculty hiring, recruitment, promotion, tenure, sabbatical-tracking, and more. The systems will launch on July 1, 2019, and a period of disruption during summer 2019 is expected as users acclimatize to them. It is anticipated that both products will be adopted widely across the University. Hands-on training will begin in April and will include classroom-based training sessions and online training modules. On-demand training will be available (e.g., for faculty search committees).

    Moderated Discussion. SEC members discussed which specific topics to address in an in-depth manner during the year. Several topics were identified, of which two will be selected at the next meeting.

    Honors

    Amber Alhadeff: L’Oreal Women in Science Fellowship

    Amber Alhadeff, a postdoctoral researcher in the department of biology at Penn, is one of the five recipients of the L’Oréal USA 2018 For Women in Science Fellowship. The fellowships are awarded annually to female postdoctoral scientists, and the $60,000 grant is given to advance each recipient’s research.

    Dr. Alhadeff’s research focuses on understanding the neural circuits and molecular mechanisms that control food intake. This research will give scientists valuable insight into how to treat metabolic disease such as obesity, eating disorders and type II diabetes. The L’Oréal USA For Women in Science fellowship will provide Dr. Alhadeff funding to further her research, including support to hire two female undergraduate students. During her fellowship, Dr. Alhadeff will also serve as a mentor to local middle and high school girls with a special focus on STEM.

    Liang Feng: Optical Society Fellow

    Liang Feng, assistant professor in the departments of materials science & engineering and electrical & systems engineering in SEAS, has been elected a fellow of the Optical Society.

    Since 1916, the scholarly society has been the “world’s leading champion for optics and photonics, uniting and educating scientists, engineers, educators, technicians and business leaders worldwide to foster and promote technical and professional development.”

    Dr. Feng joined Penn Engineering last year, among the ranks of 30 new faculty hired in a two-year span. That group includes a concentration of experts in data science and new computational techniques, areas that Dr. Feng approaches from his background in developing nanomaterials that provide unprecedented control over light.

    The Optical Society cited Dr. Feng for his “outstanding pioneering scientific contributions to the field of non-Hermitian photonics and its applications in integrated nanophotonics and optoelectronics.”

    Dr. Feng was also recently awarded an NSF grant from its Engineering Quantum Integrated Platforms program; he and fellow MSE professor Ritesh Agarwal will use it to build quantum communication devices that take advantage of chiral properties of individual photons.

    Three PSOM Faculty: Career Award for Medical Scientists

    Three Perelman School of Medicine faculty members at the University of Pennsylvania have received 2018 Burroughs Wellcome Fund Career Awards for Medical Scientists: Elizabeth Joyce Bhoj, assistant professor of pediatrics, for research on “a novel pediatric neurodegenerative disorder caused by histone 3.3 mutations: unique insights into the histone code”; Sarah Emily Henrickson, instructor of pediatrics, for “directly interrogating mechanisms of human T cell dysfunction in the setting of chronic inflammation and atopy”; and Mark Sellmyer, assistant professor of radiology, for “engineering digital logic for cell-cell interactions.”

    The Career Awards for Medical Scientists (CAMS) is a highly competitive program that provides $700,000 awards over five years to physician-scientists who are committed to an academic career, to bridge advanced postdoctoral/fellowship training and the early years of faculty service.

    Deep Jariwala: Young Investigator Award

    The journal Nanomaterials has named Deep Jariwala, assistant professor in the department of electrical and systems engineering in Penn’s School of Engineering, the winner of its annual Young Investigator Award, as selected by the journal’s editorial board.

    Dr. Jariwala is an expert in nano- and atomic-scale devices that could have applications in information technology and renewable energy, among other fields. In giving him the award, Nanomaterials noted, “Dr. Jariwala’s impressive work combines novel nanomaterials, such as carbon nanotubes and 2D transition metal dichalcogenides, into heterostructures and electronic and optoelectronic devices. His work encompasses synthesis of nanomaterials, characterization of their electronic and optical properties, and then fabrication of them into devices, such as diodes, FETs and photodetectors.”

    Vincent Reina: National Public Policy Fellowship

    Vincent Reina, assistant professor in the department of city and regional planning at PennDesign, has been awarded a fellowship from the Association for Public Policy Analysis & Management (APPAM). The 40 for 40 Fellowships provide funding for early-career research professionals to attend APPAM’s Fall Research Conference in Washington, DC. APPAM notes that promoting the work of early-career professionals like Dr. Reina is intended to shape the future of public policy research.

    This is the second time Dr. Reina has been honored by APPAM. He earned the organization’s prestigious Dissertation Award in 2016. His dissertation, “The Impact of Mobility and Government Subsidies on Household Welfare and Rents,” examines the behavior of landlords who provide affordable housing and the formation of policies to ensure the availability of affordable housing for low income households. Dr. Reina’s research focuses on urban economics, low-income housing policy, household mobility and the role of housing in community and economic development.

    Kimberly Trout: American College of Nurse-Midwives Fellow

    Kimberly Kovach Trout, assistant professor of women’s health in the department of family and community health and the track lead of the nurse-midwifery graduate program in Penn Nursing, has been named a fellow in the American College of Nurse-Midwives (ACNM).

    Fellowships in the ACNM are awarded to midwives who have demonstrated leadership, clinical excellence, outstanding scholarship and professional achievement and who have merited special recognition both within and outside of the midwifery profession. The fellowship’s mission is to serve the ACNM in a consultative and advisory capacity.

    Dr. Trout’s induction ceremony took place this past May during the ACNM 63rd Annual Meeting & Exhibition in Savannah, Georgia.

    Chioma Woko: Health Policy Research Scholar

    The Robert Wood Johnson Foundation recently announced that Chioma Woko, a doctoral student in Penn’s Annenberg School for Communication, has been named to its 2018 cohort of 40 Health Policy Research Scholars.

    Designed for second-year doctoral students from underrepresented populations and disadvantaged backgrounds, the Health Policy Research Scholars program helps researchers from all fields—from economics to epidemiology—apply their work to policies that advance equity and health while building a diverse field of leaders who reflect our changing national demographics. The four- to five-year program provides participants with an annual stipend of up to $30,000.

    Ms. Woko is a health communication doctoral student studying health behaviors online. She is conducting research on what factors influence people in social networks to carry out health behaviors, such as physical activity, contraceptive use and tobacco-related behaviors.

    Focusing on Black American populations, Ms. Woko’s work is based on evidence that suggests that different demographic groups use online resources for health in different ways, which are inherently related to disparities in health literacy and access to health resources. Ultimately, she hopes that her work will inform policy development that will impact the health outcomes of all marginalized groups.

    Ms. Woko previously held a position at RTI International, where she worked on government funded research projects on food, nutrition and obesity policy. She will be advised through the program by John B. Jemmott III, Kenneth B. Clark Professor of Communication & Psychiatry and director of the Center for Health Behavior and Communication Research.

    Two SEAS Teams: NSF RAISE EQuIP Grants

    The National Science Foundation has awarded two of eight Research Advanced by Interdisciplinary Science and Engineering (RAISE) grants to teams from Penn’s School of Engineering and Applied Science for their proposed quantum information science research. Each team will receive $750,000 over the next three years.

    The RAISE Engineering Quantum Integrated Platforms (EQuIP) grants are designed to propel advances in quantum information science, which aims to harness the inherent quantum-mechanical properties of light and matter for new technologies. The EQuIP program focuses on quantum communication, which explores how information can be condensed, sent and stored.

    A team headed by Lee Bassett, assistant professor in the department of electrical and systems engineering, will explore how individual impurity atoms in a diamond can be used as a platform for quantum communication. The goal is to develop compact, chip-scale devices that operate as small quantum computers coupled to single photons in optical fibers, which can serve as the backbone of a future quantum internet. The team includes members of the Electronic Photonic Microsystems Lab led by Firooz Aflatouni, the Skirkanich Assistant Professor of Electrical and Systems Engineering, as well as a group at Brown University. The team will also collaborate with Tim Taminiau and his group at QuTech at Delft University of Technology in the Netherlands.

    The second Penn team is led by Ritesh Agarwal, professor of materials science and engineering, and Liang Feng, assistant professor of materials science and engineering and of electrical and systems engineering. Together with Stefan Strauf, professor of physics at the Stevens Institute of Technology and an expert in quantum signal generation, they will advance quantum communication by using advanced nanophotonic technology, delivering revolutionary quantum circuits that generate and process quantum signals via a single photon. Dr. Agarwal said, “We’re not just reducing the size—we’re reducing the cost. That’s our dream, to make this technology accessible to everyone.”

    Both teams plan to integrate undergraduate and graduate students into the research and will participate in educational outreach programs to facilitate interest in quantum information science in students from preschool through 12th grade.

    Teams from School of Nursing, SAS: Green Purchasing Awards

    Penn’s Green Purchasing Awards, presented by Penn Purchasing Services and Penn Sustainability, were announced at the annual Purchasing Services Supplier Show on September 25.

    The award program recognizes the outstanding contributions of an individual or team that significantly advance the development of sustainable purchasing practices at Penn.

    “With Penn’s dedication to environmental sustainability, it’s important to acknowledge the outstanding contributions being made at the University. We work in a decentralized purchasing environment with daily buying decisions that are made at the department level,” said Mark Mills, executive director of Penn Purchasing Services. “Given this model, it’s important to recognize our colleagues in the Schools and Centers who have embraced sustainability in their purchasing choices. Our 2018 honorees are making smart, responsible purchasing decisions and instituting new programs—many of which can be shared and repeated across the University.”

    The first 2018 award was given to the School of Nursing’s One Less campaign team. The team chose a series of green gifts for faculty and staff, which were distributed at the school’s annual Service and Recognition Awards event. This all-volunteer team worked on- and off-duty to design the logo and reach consensus on the choice of this year’s reusable items. Among them were small tote bags, which can remove disposable plastic bags from the waste stream, and reusable travel mugs, which can eliminate 23 pounds of waste annually per person using them. The team negotiated with the school’s café operator to provide an ongoing discount to anyone who uses those travel mugs (or any reusable cup), incentivizing members of the community to make a green purchase daily. The award recipients included Patricia Adams, Lucia DiNapoli, Olivia Duca, Joseph Gomez, Karen Keith-Ford, Theresa Lake, Holly Marrone, Seymour Sejour and Meredith Swinney.

    The second award recipient was the Furniture Reuse and Recycling team from SAS. The team created a system that strives to divert used furniture from landfills. The process begins by creating a monthly inventory of all used furniture available in the School. The inventory is then circulated for review to SAS’s building administrators and departments. Then the list is sent to the Netter Center for Community Partnerships—reaching dozens of charity partners who may be able to reuse furniture listed on the inventory. Furniture that cannot be reused within SAS or by charity partners is recycled by Revolution Recovery. Revolution Recovery is able to divert over 80% of SAS furniture that has reached the end of its useful life from landfills. In the last four quarters for which program metrics are available (FY17 Q4-FY18 Q3), SAS has diverted 20.86 tons of furniture from landfills. That is an 88.8% diversion rate overall. The honorees from the SAS team are Jonathan Burke, Carvel Camp, Floyd Emelife, Brittany Gross, Ruth Kelley, Ryshee McCoy and Isabel Sampson-Mapp.
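    For readers who want to see how the reported figures fit together, here is a minimal arithmetic sketch in Python. It assumes the diversion rate is simply tons diverted divided by total tons handled; the total tonnage is not stated in the article and is back-calculated here purely for illustration.

```python
# Minimal sketch of the diversion-rate arithmetic (assumption: rate = diverted / total handled).
# The total tonnage handled is not reported; it is inferred here for illustration only.

diverted_tons = 20.86    # reported: furniture diverted from landfills, FY17 Q4-FY18 Q3
diversion_rate = 0.888   # reported: 88.8% overall diversion rate

total_handled = diverted_tons / diversion_rate   # implied total furniture handled (~23.5 tons)
landfilled = total_handled - diverted_tons       # implied remainder sent to landfill (~2.6 tons)

print(f"Implied total handled: {total_handled:.1f} tons")
print(f"Implied tonnage landfilled: {landfilled:.1f} tons")
```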

    Both initiatives align with Penn’s Climate Action Plan 2.0, the University’s comprehensive strategic roadmap for environmental sustainability. For more information about the recipients, visit www.upenn.edu/purchasing

    Research

    Prenatal Gene Editing for Treating Congenital Disease

    For the first time, scientists performed prenatal gene editing to prevent a lethal metabolic disorder in laboratory animals, offering the potential to treat human congenital diseases before birth. Published in Nature Medicine, research from the Perelman School of Medicine at the University of Pennsylvania and the Children’s Hospital of Philadelphia (CHOP) offers proof-of-concept for prenatal use of a sophisticated, low-toxicity tool that efficiently edits DNA building blocks in disease-causing genes.

    The team reduced cholesterol levels in healthy mice treated in utero by targeting a gene that regulates those levels. They also used prenatal gene editing to improve liver function and prevent neonatal death in a subgroup of mice that had been engineered with a mutation causing the lethal liver disease hereditary tyrosinemia type 1 (HT1).

    HT1 in humans usually appears during infancy, and it is often treatable with a medicine called nitisinone and a strict diet. However, when treatments fail, patients are at risk of liver failure or liver cancer. Prenatal treatment could open a door to disease prevention for HT1 and potentially for other congenital disorders.

    “Our ultimate goal is to translate the approach used in these proof-of-concept studies to treat severe diseases diagnosed early in pregnancy,” said study co-leader William H. Peranteau, a pediatric and fetal surgeon in CHOP’s Center for Fetal Diagnosis and Treatment and assistant professor of surgery in the Perelman School of Medicine. “We hope to broaden this strategy to intervene prenatally in congenital diseases that currently have no effective treatment for most patients and result in death or severe complications in infants.”

    In this study, the scientists used base editor 3 (BE3), which takes a partially active version of the CRISPR-Cas9 tool and harnesses it as a homing device to carry an enzyme to a highly specific genetic location in the liver cells of fetal mice. The enzyme chemically modified the targeted genetic sequence, changing one type of DNA base to another. Because BE3 does not fully cut the DNA molecule, it does not leave the DNA vulnerable to the unanticipated errors that can arise when a cut is repaired, as has been seen with the standard CRISPR-Cas9 tool.

    After birth, the mice in the study carried stable amounts of edited liver cells for up to three months after the treatment, with no evidence of unwanted, off-target editing at other DNA sites. In the subgroup of the mice bioengineered to model HT1, BE3 improved liver function and preserved survival. The BE3-treated mice were also healthier than mice receiving nitisinone, the current first-line treatment for HT1 patients. To deliver CRISPR-Cas9 and BE3, the scientists used adenovirus vectors, but they are investigating alternate delivery methods such as lipid nanoparticles, which are less likely to stimulate unwanted immune responses.

    Regrowing Dental Tissue with Baby Teeth Stem Cells

    When trauma affects an immature permanent tooth, it can hinder blood supply and root development, resulting in what is essentially a “dead” tooth. Until now, the standard of care has entailed a procedure called apexification that encourages further root development, but it does not replace the lost tissue from the injury, and root development still proceeds abnormally.

    New results from a clinical trial, jointly led by Songtao Shi of the University of Pennsylvania and Yan Jin, Kun Xuan and Bei Li of the Fourth Military Medical University in Xi’an, China, suggest that there is a more promising path: using stem cells extracted from the patient’s baby teeth. Dr. Shi and colleagues have learned more about how these dental stem cells, called human deciduous pulp stem cells (hDPSC), work and how they could be safely employed to regrow dental tissue, known as pulp.

    The Phase 1 trial, conducted in China, enrolled 40 children who had each injured one of their permanent incisors and still had baby teeth. Thirty were assigned to hDPSC treatment and 10 to the control treatment, apexification. Those who received hDPSC treatment had tissue extracted from a healthy baby tooth. The stem cells from this pulp were allowed to reproduce in a laboratory culture, and the resulting cells were implanted into the injured tooth. Upon follow-up, the researchers found that patients who received hDPSCs had more signs than the control group of healthy root development and thicker dentin, the hard part of a tooth beneath the enamel, as well as increased blood flow. At the time the patients were initially seen, all had little sensation in the tissue of their injured teeth. A year following the procedure, only those who received hDPSCs had regained some sensation.

    While using a patient’s own stem cells reduces the chances of immune rejection, it is not possible in adult patients who have lost all of their baby teeth. Dr. Shi and colleagues are beginning to test the use of allogenic stem cells, or cells donated from another person, to regenerate dental tissue in adults. They are also hoping to secure FDA approval to conduct clinical trials using hDPSCs in the United States. Eventually, they see even broader applications of hDPSCs for treating systemic disease, such as lupus.

    Reducing Political Polarization on Climate Change

    Social media networks may offer a way to reduce political polarization, according to new findings published in the Proceedings of the National Academy of Sciences from a team led by Damon Centola, associate professor of communication in Penn’s Annenberg School for Communication and the director of the Network Dynamics Group.

    Researchers asked 2,400 Republicans and Democrats to interpret recent climate-change data on Arctic sea-ice levels. Initially, nearly 40 percent of Republicans incorrectly interpreted the data, saying that Arctic sea-ice levels were increasing; 26 percent of Democrats made the same mistake. However, after participants interacted in anonymous social media networks—sharing opinions about the data and its meaning for future levels of Arctic sea ice—88 percent of Republicans and 86 percent of Democrats correctly analyzed it.

    Republicans and Democrats who were not permitted to interact with each other in social media networks but had several additional minutes to reflect on the climate data before updating their responses remained highly polarized and offered significantly less accurate forecasts.

    Dr. Centola, along with Penn doctoral student Douglas Guilbeault and recent Penn PhD graduate Joshua Becker, constructed an experimental social media platform to test how different kinds of social media environments would affect political polarization and group accuracy. The researchers randomly assigned participants to one of three experimental social media groups: a political-identity setup, which revealed the political affiliation of each person’s social media contacts; a political-symbols setup, in which people interacted anonymously through social networks but with party symbols of the donkey and the elephant displayed at the bottom of their screens; and a non-political setup, in which people interacted anonymously. Twenty Republicans and 20 Democrats made up each social network. Once randomized, every individual then viewed a NASA graph with climate change data as well as forecasted Arctic sea-ice levels for the year 2025. They first answered independently, and then viewed peers’ answers before revising their guesses twice more.
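    As a purely illustrative aside, the revision dynamic described above can be sketched in a few lines of code. The toy model below is an assumption of this write-up, not the study’s actual platform, network structure or analysis: each simulated participant starts with an independent estimate and, over two revision rounds, moves halfway toward the average of the other participants’ answers, which tends to pull initially divergent estimates together.

```python
import random

# Toy sketch of repeated social revision (illustrative only; not the study's
# platform, its network structure, or its statistical analysis).
random.seed(0)

N_PER_PARTY = 20   # each experimental network had 20 Republicans and 20 Democrats
ROUNDS = 2         # participants revised their answers twice after seeing peers' answers

# Hypothetical starting forecasts (arbitrary units): one half leans toward
# "sea ice will increase", the other toward "sea ice will decrease".
estimates = ([random.uniform(0.9, 1.3) for _ in range(N_PER_PARTY)] +
             [random.uniform(0.5, 0.9) for _ in range(N_PER_PARTY)])

def revise(est):
    """Each participant moves halfway toward the mean of everyone else's estimate."""
    revised = []
    for i, own in enumerate(est):
        peers = est[:i] + est[i + 1:]
        peer_mean = sum(peers) / len(peers)
        revised.append((own + peer_mean) / 2)
    return revised

def spread(est):
    """Range between the most extreme estimates; a rough stand-in for disagreement."""
    return max(est) - min(est)

print(f"initial spread: {spread(estimates):.2f}")
for r in range(ROUNDS):
    estimates = revise(estimates)
    print(f"after revision round {r + 1}: spread = {spread(estimates):.2f}")
```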

    “We were amazed to see how dramatically bipartisan networks could improve participants’ judgments,” said Dr. Centola. In the non-political setup, for example, polarization disappeared entirely, with more than 85 percent of participants agreeing on a future decrease in Arctic sea ice.

    “But,” Dr. Centola added, “…the improvements vanished completely with the mere suggestion of political party.”

    New Insights on Interprofessional Health-Care Training

    A recent research study led by Zvi D. Gellis, director of the Center for Mental Health & Aging and the Ann Nolan Reese Penn Aging Certificate Program at Penn’s School of Social Policy & Practice, demonstrates the positive impact of utilizing Interprofessional Education (IPE) simulation-based training to instruct health professions students in team communication.

    The federally funded study, led by Dr. Gellis and his health professions colleagues from Penn and the University of the Sciences, reports on outcomes of a simulation-based “real-world” training among a large group of health professions students comprising medicine, nursing, chaplaincy and geriatrics social work scholars (from the Penn Aging Certificate Program), as well as University of the Sciences occupational therapy, physical therapy and pharmacy students.

    Dr. Gellis and his research partners examined a comprehensive set of outcomes overlooked in previous work, including attitudes towards health-care teams, self-efficacy in team communication, interprofessional collaboration and satisfaction with the simulation. The research team chose a geriatrics-palliative case study because this specialty has grown significantly in the US. Interprofessional teams frequently treat older patients with prevalent and complex chronic illnesses. Following the training, team communication self-efficacy scale scores and interprofessional collaboration scores increased among the health professions students. In addition, all participants reported more positive attitudes towards working in health-care teams and reported high satisfaction scores, post-simulation.

    The study, published in the journal Gerontology & Geriatrics Education, revealed many advantages to simulation training in health-care education. Simulation training enables students to practice clinical skills in real time among peers and faculty, without jeopardizing the safety of actual patients, and it affords the opportunity to receive immediate patient feedback within a supportive learning environment. Meanwhile, faculty have the chance to lead by example by discussing the significance of interprofessional team roles, participant recruitment in simulation learning with other disciplines, and modeling positive and professional clinical team behaviors. Simulation training can improve performance and self-efficacy in real-world clinical settings, resulting in a better experience for patients and their caregivers.

    Events

    Diversity Lecture: Sexual Assault in America

    Susan B. Sorenson, professor of social policy at SP2 and executive director of the Ortner Center on Violence and Abuse in Relationships, will discuss “From College Campuses to #MeToo: Sexual Assault in America” on Wednesday, October 24 as part of The Diversity Lecture Series at Penn. The noon lecture in the second-floor meeting room of the Penn Bookstore is free and open to the public.

    Dr. Sorenson will discuss how views on sexual assault have changed during the past 50 years with a particular focus on the role of college campuses. The hour will be split between her talk and a conversation about what might be next.

    The Diversity Lecture Series is intended to provide insight into and understanding of multicultural issues, and it is designed as an essential component of education that helps encourage civil debate, broaden the basis for critical thought and promote cultural understanding.

    To register, visit https://tinyurl.com/ya4yh25k

    Live Music at the Annenberg Center

    The Philadelphians: Migrations That Made Our City: Philadelphia has been shaped by a long history of diverse cultures and traditions. In The Philadelphians, the Chamber Orchestra explores the populations that migrated to and influenced the city, uncovering a unique, shared identity. Audiences will experience two periods in time, a contrast of colonial-era early music with new works that look back on Philly’s history. Along with junto-style discussion groups, period performance and modern interpretations will connect the audience with those who created the cultural landscape.

    Who is Philadelphia? What can we learn from our heritage, and how will our city be changed by new waves of immigrants? Join us as we examine our ancestry through music and discover how we came to be Philadelphians.

    The 2018-2019 season’s focus is the African American and English Colonial Experience, with the first performance by The Chamber Orchestra of Philadelphia, Origins & Diaspora, on Wednesday, October 17 at 7:30 p.m. The program will include West African musical traditions and influences in classical music.

    This will be a unique, interactive chamber music experience with members of The Chamber Orchestra of Philadelphia performing in the round. Host Jim Cotter will provide background and insight on each work and lead conversations with the musicians between pieces. The performance concludes with a casual audience Q&A. Tickets: https://tickets.annenbergcenter.org

    The Portland Cello Project makes its Annenberg Center debut at 8 p.m. on Saturday, October 20, performing Radiohead’s OK Computer and more. Cellos and Radiohead were meant to collide, and the results are seriously epic. Portland’s premier alt-classical group, complete with brass, percussion and vocals, pays tribute to Radiohead with a unique spin on music from the band’s OK Computer album and other favorites. “Every piece is treated with equal sincerity and arranged not just to invoke the original but deconstruct and re-imagine its essence.” (Seattle Times) Expect an evening “where boundaries are blurred and cellos are in abundance.” (The Strad)

    Soul Songs: Inspiring Women of Klezmer will have a world premiere on Sunday, October 28 at 4 p.m.—a one-night-only special event—where 12 women will be breathing contemporary life into the centuries-old tradition of Eastern European Jewish folk music at Annenberg Center’s Zellerbach Theatre. The brainchild of fourth generation klezmer musician and concert artistic director Susan Watts, this performance was created from the world-renowned trumpeter’s concern for the future of her art and appreciation of every individual involved.

    “Soul Songs is about the old and new intertwined,” said Ms. Watts, a 2015 Pew Fellow. “It is future provoking, intuitive, grass roots. Soul Songs is about these women’s musical journeys, their artistry and their discernment to use the force of adversity to their gain. It is the klezmer of today and a prelude to future possibilities for the art and the communities it nurtures.” Soul Songs will feature new compositions, written and performed by three generations of women who bring contemporary meaning to this traditional music. Major support has been provided to the Philadelphia Folklore Project by The Pew Center for Arts & Heritage.

    Tickets: https://tickets.annenbergcenter.org

    BioArt and Bacteria at the Esther Klein Gallery

    A solo exhibition by internationally acclaimed British artist Anna Dumitriu will open at the Esther Klein Gallery on Thursday, October 18. BioArt and Bacteria explores our relationship with the microbial world and the history and future of infectious diseases. An artist lecture will be held on Thursday, October 18 at 5 p.m., immediately followed by the exhibit’s opening reception 6-8 p.m. at the gallery.

    To register, visit https://sciencecenter.org/engage/bioart-and-bacteria-artist-lecture-and-opening-reception

    The exhibit runs through November 24.

    Update: October AT PENN

    Talks

    19 The History, Theory and Practice of Administrative Constitutionalism; 2018 University of Pennsylvania Law Review Symposium; 1 p.m.; Penn Law; info and to register: www.pennlawreview.com/symposium. Through October 20.

    25 From Inquiry to Innovation: How a Clinical Question Became a Business Opportunity; Kathryn Bowles, nursing; 3 p.m.; Fagin Hall; RSVP: https://tinyurl.com/y82xfnha

    Public Health vs. the Viruses: A Matchup for the Century; CPHI Seminar Series; Anne Schuchat, CDC; 3 p.m.; Rubenstein Auditorium, Smilow Center; (Center for Public Health Initiatives, Penn Dental, Prevention Research Center, Student Health Service).

    AT PENN Deadlines

    The October AT PENN is online. The November AT PENN will be published October 30. The deadline for the weekly Update is the Monday of the week prior to the issue. The deadline for the December AT PENN is November 5.

    Crimes

    Weekly Crime Reports

    Below are the Crimes Against Persons, Crimes Against Society and Crimes Against Property from the campus report for October 1-7, 2018. View prior weeks’ reports. —Ed.

    This summary is prepared by the Division of Public Safety and includes all criminal incidents reported and made known to the University Police Department for the dates of October 1-7, 2018. The University Police actively patrol from Market St to Baltimore Ave and from the Schuylkill River to 43rd St in conjunction with the Philadelphia Police. In this effort to provide you with a thorough and accurate report on public safety concerns, we hope that your increased awareness will lessen the opportunity for crime. For any concerns or suggestions regarding this report, please call the Division of Public Safety at (215) 898-4482.

    10/2/18 4:02 PM 434 S 42nd St Secured bike taken

    10/2/18 5:00 PM 3900 Walnut St Confidential

    10/2/18 7:39 PM 4000 Locust Walk Complainant assaulted by offender

    10/3/18 1:17 PM 3400 Spruce St Money not deposited in bank

    10/3/18 1:23 PM 3409 Walnut St Unattended backpack and contents taken

    10/3/18 7:53 PM 4100 Walnut St Bike taken/Arrest

    10/4/18 8:22 AM 4207 Baltimore Ave Offender smashed front door window and stole tools

    10/4/18 10:20 AM 3400 Spruce St Patient’s unsecured phone stolen

    10/4/18 1:10 PM 3335 Woodland Walk Keys and cell phone not returned to owner

    10/5/18 2:30 AM 3401 Spruce St Unknown male touched complainant inappropriately

    10/5/18 12:55 PM 3800 Walnut St Secured bike taken

    10/6/18 2:16 AM 3549 Chestnut St Altercation between boyfriend and girlfriend

    10/6/18 2:41 PM 3603 Walnut St Merchandise taken without rendering payment

    10/6/18 7:31 PM 4039 Chestnut St Items removed from packages

    10/6/18 7:38 PM 3631 Walnut St Phone taken from display

    10/7/18 9:17 PM 3000 Chestnut St Complainant assaulted by partner/Arrest

    18th District

    Below are the Crimes Against Persons from the 18th District: 9 incidents (1 robbery, 1 assault, 1 indecent assault, 2 aggravated assaults and 4 domestic assaults) were reported October 1-7, 2018 by the 18th District covering the Schuylkill River to 49th Street & Market Street to Woodland Avenue.

    10/1/18 8:51 PM 4806 Market St Robbery

    10/2/18 5:00 PM 3900 Walnut St Indecent Assault

    10/2/18 7:59 PM 40th & Locust Sts Assault

    10/3/18 6:21 PM 4901 Chestnut St Aggravated Assault

    10/3/18 9:44 PM 47th & Springfield Ave Domestic Assault

    10/5/18 6:23 PM 4500 Baltimore Ave Domestic Assault

    10/6/18 2:52 AM 3549 Chestnut St Domestic Assault

    10/6/18 11:11 AM 48th & Spruce Sts Aggravated Assault

    10/7/18 9:18 PM 30th & Chestnut Sts Domestic Assault

    Bulletins

    A Drug-Free Workplace

    The University of Pennsylvania is committed to maintaining a drug-free workplace for the health and safety of the entire Penn community. Drug and alcohol abuse can harm not only the users but also their family, friends and coworkers. As Penn observes National Drug-Free Work Week, please take the time to review the University’s drug and alcohol policies.

    Penn’s Drug and Alcohol Policies

    Penn prohibits the unlawful manufacture, distribution, dispensation, sale, possession or use of any drug by its employees in its workplace. Complete policy details are available online:

    Drug-Free Workplace Policy: https://www.hr.upenn.edu/policies-and-procedures/policy-manual/performance-and-discipline/drug-free-workplace

    The University Alcohol and Drug Policy: https://catalog.upenn.edu/pennbook/alcohol-drug-policy

    Understanding Addiction

    Addiction is a serious disease, but many effective treatments are available. Visit the Health Advocate at http://www.healthadvocate.com/upenn for facts about addiction, recovery and support services.

    Help Is Here

    If you or a family member has a substance abuse problem, we encourage you to seek help. Penn provides free, confidential counseling services for you and your immediate family members through the Employee Assistance Program (EAP). The EAP will assist you with challenges that may interfere with your personal or professional life, including substance abuse.

    For more information about the EAP’s counseling and referral services, visit the Employee Assistance Program web page at https://www.hr.upenn.edu/eap or contact the Employee Assistance Program 24 hours a day, 7 days a week at (866) 799-2329.

    You can also refer to Penn’s addiction treatment publication for information about treatment benefits and resources at https://www.hr.upenn.edu/docs/default-source/benefits/opioid-brochure.pdf

    Penn’s Way 2019 Week One Winners and Week Three Prizes

    Penn’s Way 2019 Raffle Prize Listing: Week One Winners

    Office Depot: Supply Basket ($100); Kara Eller, HUP

    Philip Rosenau Co., Inc.: Walmart gift card ($50); Orjana Kurti, CPUP

    Fisher Scientific: Home Depot gift card ($50); Susan Sorenson, SP2

    Fisher Scientific: Lowe’s gift card ($50); Geoffrey Filinuk, ISC

    Specialty Underwriters LLC: Amazon gift card ($100); Shynita Price, UPHS Corporate

    Philadelphia Eagles: Carson Wentz autographed 8×10 photo ($50); Joanne DeLuca, CPUP

    Week Three Drawing: October 22, 2018

    Visit www.upenn.edu/pennsway for more information about the raffle and making a pledge. Entries must be received by 5 p.m. on the prior Friday for inclusion in a given week’s drawing. Note: List is subject to change.

    Sponsor: prize (value)

    Philip Rosenau Co., Inc.: Walmart gift card ($50)

    Fisher Scientific: ExxonMobil gift card ($50)

    Fisher Scientific: Old Navy gift card ($50)

    Philadelphia Eagles: Chris Long autographed Super Bowl LII mini helmet ($30)

    Starr Restaurants: Parliament Coffee Bar gift bag ($75)

    Gift Baskets for Thought: Penn-Themed gift basket ($75)

    Philadelphia Flyers: Signed memorabilia ($35)


    Almanac is the official weekly journal of record, opinion and news for the University of Pennsylvania community.


    © 1954-2018 The University of Pennsylvania Almanac.

    3910 Chestnut St., 2nd Floor, Philadelphia, PA 19104-3111

  • E-mail: almanac@upenn.edu
  • Phone: (215) 898-5274