Fundamental Economic Database Construction Part 1

In a previous post, I outlined the high-level approach for creating a basic algorithmic trading strategy. The work below covers one of the inputs to that overall system: economic events and the news.

Once an investor trades enough, they begin to rely on global events and the news: reading articles, getting a sense of the market, and piecing together the relationships between everything. Much of what is being made sense of is what traders refer to as the “fundamentals.”

In the macroeconomic sense, the fundamentals refer to the economic health of each country or regional economic bloc. Such measures include, but are not limited to, growth, inflation, unemployment, and account balances. This data can be aggregated into a relational database.

An economic database gives the investor a single repository of economic information for viewing how each economy performs over time in an unbiased, data-driven way. By seeing which countries are performing well and which poorly, investors can spot divergences in performance and thus seize opportunities or avoid risk.

I took the approach of seeking out five categories of economic information: growth, consumer health, inflation, unemployment, and industrial activity (with a housing category added for Japan and the United States). There are, of course, more categories that are not covered here, such as purchasing power, fiscal policy, and account balances. My next goal was to identify the economies to be examined and the sources of that data. I take a top-down approach, so I decided to look at the United States, China, Europe, and Japan as the four largest economies. Finally, the reports below are the ones I decided to use, together with the category I bundled each report into for later use. The country codes use currency codes (USD, EUR, JPY, CNY) as their key, and a sketch of how the reports could be loaded into a relational table follows the list.

The next post will outline how I used the reports together for advanced calculations and data manipulation.

Category | Report | Currency
Inflation | Consumer Price Index (MoM) | CNY
Growth | Gross Domestic Product (YoY) | CNY
Industrial Activity | NBS Manufacturing PMI | CNY
Growth | New Loans | CNY
Industrial Activity | Non-Manufacturing PMI | CNY
Inflation | Producer Price Index (YoY) | CNY
Inflation | Consumer Price Index (YoY) | EUR
Growth | Gross Domestic Product s.a. (YoY) | EUR
Industrial Activity | Markit Manufacturing PMI | EUR
Consumer Health | Retail Sales (YoY) | EUR
Unemployment | Unemployment Change (Germany) | EUR
Unemployment | Unemployment Rate | EUR
Consumer Health | ZEW Survey – Economic Sentiment | EUR
Growth | All Industry Activity Index (MoM) | JPY
Consumer Health | Consumer Confidence Index | JPY
Inflation | National Consumer Price Index (YoY) | JPY
Growth | Gross Domestic Product (QoQ) | JPY
Housing | Housing Starts (YoY) | JPY
Industrial Activity | Machine Tool Orders (YoY) | JPY
Consumer Health | Retail Trade (YoY) | JPY
Growth | Trade Balance – BOP Basis | JPY
Unemployment | Unemployment Rate | JPY
Consumer Health | Reuters/Michigan Consumer Sentiment Index | USD
Inflation | Consumer Price Index Ex Food & Energy (YoY) | USD
Industrial Activity | Durable Goods Orders | USD
Housing | Existing Home Sales Change (MoM) | USD
Growth | Gross Domestic Product Annualized | USD
Housing | New Home Sales (MoM) | USD
Unemployment | Nonfarm Payrolls | USD
Industrial Activity | ISM Non-Manufacturing PMI | USD
Consumer Health | Retail Sales (MoM) | USD
Unemployment | Unemployment Rate | USD
Consumer Health | Retail Sales (YoY) | CNY
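
As a rough illustration of how these reports could be keyed in a relational store, here is a minimal sketch. The field names, the grouping helper, and the sample rows are my own assumptions for illustration, not the actual database schema.

// Minimal sketch: the report catalog as rows for a relational table.
// Field names (category, report, country) are illustrative assumptions.
const reportCatalog = [
  { category: "Inflation",           report: "Consumer Price Index (MoM)",   country: "CNY" },
  { category: "Growth",              report: "Gross Domestic Product (YoY)", country: "CNY" },
  { category: "Industrial Activity", report: "NBS Manufacturing PMI",        country: "CNY" },
  { category: "Unemployment",        report: "Unemployment Rate",            country: "USD" }
  // ...the remaining reports in the table above follow the same shape
];

// Group the catalog by country so each economy's reports can be pulled together.
function groupByCountry(catalog) {
  return catalog.reduce((groups, row) => {
    (groups[row.country] = groups[row.country] || []).push(row.report);
    return groups;
  }, {});
}

console.log(groupByCountry(reportCatalog));
// { CNY: [ 'Consumer Price Index (MoM)', ... ], USD: [ 'Unemployment Rate' ] }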

 

Create a tabbed icon menu from a SharePoint library using jQuery and SPServices

Recently I was asked to create a familiar tabbed menu, but the client also wanted the menu to be easily configurable by end users. To manage this, the solution uses a document library to host the images and a jQuery script to return the images as HTML for rendering in the browser. Grab a cup of coffee, block out an hour or two, and see how you can implement this on an O365 site.

  1. Download the jQuery UI Tabs demo and all the scripts with it: http://jqueryui.com/tabs/. Use View Source and copy the HTML for use later.
  2. Use the jQuery ThemeRoller (http://jqueryui.com/themeroller/) to generate the desired CSS for your menu layout. Download and save the CSS to your site’s style library for later reference. Note that in SP 2013/O365 it does not appear that you can call external CSS files any longer; they must go in the <style> tags within your web part’s code.
  3. Create a custom document library by going to the site settings gear icon and selecting “Add an App.”
  4. Create two custom columns, ImageSrc and URL, as Single Line of Text types. These two columns are used to derive the HTML we will use for rendering the items in the browser.
    1. ImageSrc is the file path where the image is hosted
    2. URL is the href target you wish the icon to link to
  5. Create another custom column called Tab as a Choice column, with the choices constrained to the values of your tabs. In my example I have three tabs for types of dog breeds (toy, working, and hunting), so a Yorkie icon will have Tab = “Toy”.
  6. Create a calculated column, which I called HTML. Because of the number of quotation marks (") in the formula, you need to rework how you would normally write it, like so:
    • =CONCATENATE("<a href=",CHAR(34),URL,CHAR(34),"><img src=",CHAR(34),ImageSrc,CHAR(34),"></a>")
    • Writing it this way avoids problems with the quotation marks by using CHAR(34), which is the double-quote character.
  7. We now have the HTML for our items, which will output the image and wrap it in a hyperlink. You could, of course, design a different HTML string with more information, such as a class or ID, for additional styling and behaviors. Now take your icon set images and upload them into the library, and add the ImageSrc and URL metadata to each image item. You will see that the HTML column is formed for you by the calculation logic.
  8. Next, visit the SPServices web site (http://spservices.codeplex.com/), download the latest versions of the .js files, and upload them to a document library on your site, such as Site Assets.
  9. Next, insert the tabs HTML from step 1 into a Script Editor web part on a web part page. Within that HTML you will also add script references to retrieve the external JavaScript files, as well as the jQuery libraries hosted on code.jquery.com. Finally, you will add a reference to an additional JS file that we have not yet created. This file will execute our CAML queries and get our list item properties to be rendered into HTML. I called mine icons.js. Here is an example of the code.

<script src="//code.jquery.com/jquery-1.10.2.js"></script>
<script src="//code.jquery.com/ui/1.11.2/jquery-ui.js"></script>
<script type="text/javascript" src="https://[domain]/[site path]/SiteAssets/jquery.SPServices-2014.01.js"></script>
<script type="text/javascript" src="https://[domain]/[site path]/SiteAssets/jquery.SPServices-2014.01.min.js"></script>
<script src="https://[domain]/[site path]/SiteAssets/icons.js"></script>

<style>
/* Insert your ThemeRoller CSS here */
</style>

<!-- Insert the HTML from the jQuery UI tabs below -->
<div id="tabs">
  <ul>
    <li><a href="#tabs-1">Toy Breeds</a></li>
    <li><a href="#tabs-2">Working Dogs</a></li>
    <li><a href="#tabs-3">Hunting Dogs</a></li>
  </ul>

  <div id="tabs-1">
    <p id="mytab1"></p>
  </div>

  <div id="tabs-2">
    <p id="mytab2"></p>
  </div>

  <div id="tabs-3">
    <p id="mytab3"></p>
  </div>
</div>

  10. Next, download the CAML Designer tool.
    1. Establish the connection to your site using your usual domain credentials.
    2. Use the CAML Designer tool to write the CAML queries; you will see the generated CAML markup in the lower right.
    3. The final query should filter by the tab value and look something like this: <Query><Where><Eq><FieldRef Name='Tab' /><Value Type='Choice'>Toy</Value></Eq></Where></Query>
  11. Now we will create the icons.js file, which houses the GetListItems call that renders our items into HTML in the tabs. Refer to the GetListItems documentation (http://spservices.codeplex.com/wikipage?title=GetListItems) for all the operations and attributes you can use. Place the output from the CAML Designer tool in the function's CAMLQuery attribute. The variable Html holds each query result's HTML column property; recall that the HTML column was our calculated column built from ImageSrc and URL. My output had the text 'string;#' in front of each icon's markup, so I split on it and return only the clean HTML. The script then uses the jQuery append function to place that HTML into the DOM element with the id 'mytab1'. You may uncomment the //alert(xData.responseText); line if you wish to troubleshoot your query. A sketch showing how the same call could be repeated for the other tabs follows the code below.

 

$(function() {
  $("#tabs").tabs().addClass("ui-tabs-vertical ui-helper-clearfix");
  $("#tabs li").removeClass("ui-corner-top").addClass("ui-corner-left");
});

// Start: get list items
$(document).ready(function() {
  $().SPServices({
    operation: "GetListItems",
    // Force sync so that we have the right values for the child column onchange trigger
    async: false,
    webURL: "https://[domain]/[site]/",
    listName: "Accordian Navigation Icons",
    CAMLViewFields: "<ViewFields><FieldRef Name='HTML' /></ViewFields>",
    CAMLQuery: "<Query><Where><Eq><FieldRef Name='Tab' /><Value Type='Choice'>Toy</Value></Eq></Where></Query>",
    CAMLRowLimit: "10",
    completefunc: function (xData, status) {
      $(xData.responseXML).SPFilterNode("z:row").each(function() {
        var Html = $(this).attr("ows_HTML");
        // The calculated column comes back as "string;#<markup>", so split off the prefix
        var cleanHtml = Html.split("string;#");
        $("#mytab1").append(cleanHtml[1]);
        // alert(xData.responseText);
      });
    }
  });
});
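
The call above only fills the first tab. As a rough sketch of how the same query could be repeated for all three tabs, something like the following could replace the single call; the helper array and the tab-to-element mapping are my own assumptions, not part of the original script.

// Sketch: load each tab's icons with one GetListItems call per choice value.
// Assumes the same list, columns, and choice values described above.
$(document).ready(function() {
  var tabs = [
    { choice: "Toy",     target: "#mytab1" },
    { choice: "Working", target: "#mytab2" },
    { choice: "Hunting", target: "#mytab3" }
  ];

  $.each(tabs, function(index, tab) {
    $().SPServices({
      operation: "GetListItems",
      async: false,
      webURL: "https://[domain]/[site]/",
      listName: "Accordian Navigation Icons",
      CAMLViewFields: "<ViewFields><FieldRef Name='HTML' /></ViewFields>",
      CAMLQuery: "<Query><Where><Eq><FieldRef Name='Tab' /><Value Type='Choice'>" +
                 tab.choice + "</Value></Eq></Where></Query>",
      CAMLRowLimit: "10",
      completefunc: function(xData, status) {
        $(xData.responseXML).SPFilterNode("z:row").each(function() {
          var cleanHtml = $(this).attr("ows_HTML").split("string;#");
          $(tab.target).append(cleanHtml[1]);
        });
      }
    });
  });
});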

  12. By now, when you visit your web part page, you will see your SharePoint list content and the images hosted in your document library appearing in their appropriate tabs. Style your HTML either inline or in the <style> section. Note again that in 2013, calling an external CSS file does not work.

SharePoint 2003 to 2007 migration lessons learned

Recently I was the technical lead on a project migrating SharePoint content from 2003 to 2007. This effort was unique in that we had several custom scripts, applications, master pages, and styles. In total, about 180 sites needed to be moved. Numerous problems arose throughout the process, some technical in nature, others rooted in politics or in gaps in understanding the technology. Here are several takeaways for those attempting content migrations:

 

  1. Use SCRUM, or an agile methodology. This project had several components, such as a network engineering/DNS switch, a Metalogix move, custom .NET development, site re-branding, and custom application development. The business owners could not quite see that some of these pieces were not dependencies but isolated components which could be completed independently. An agile methodology that uses iterative development is necessary in a project such as this simply because of the amount of risk incurred with so many moving pieces. SCRUM allows practitioners to move items to production quickly and without bottlenecks. This method also increases project profitability, as explained below.
  2. SCRUM maximizes the time value of money in a project management setting. In finance, the time value of money states that money is worth more now than in the future because it can be put to work immediately. If your project's deliverables/product backlog are money, then the sooner they are used, the more value they have than if deployed in the future. Take a search feature, for example: the sooner people use it, the sooner it starts delivering value and the faster the return on investment arrives. This relates to this project because the approach taken was that everything had to be released at once, leaving many product backlog items that could have been released sooner and thus would have had a higher time value of money. Explain this to a finance professional and they will tell you that SCRUM is financially superior for project ROI, as each product iteration brings value faster.
  3. Manage risk. SharePoint is a very complex .NET application product. Looking in the master page and at the associated assemblies housed in the GAC shows the amount of interdependency in the code itself. Start adding customizations, users, permissions, and information architecture, and it takes a heavy amount of documentation and precise methods to sync everything up and have all the pieces work in concert. This adds project risk, which stands to lower quality by stretching out deadlines for deliverables. Why not lower the risk by moving in increments?
  4. Use methodologies. If you are attempting a complex content migration, then the business stakeholders must be made aware of methodologies which help with such efforts. Not only must they be made aware, but they should be shown the various ways in which these formalized methodologies reduce risk, lower cost, and improve implementation and development times. In this project, one methodology that was ignored was ITIL's concept of content freezes. Ignoring it caused a three-day content freeze during non-peak hours, resulting in an untold number of headaches for users. Always communicate the importance of following industry best practices when conducting change management procedures.
  5. Avoid customizing SharePoint if possible. This is the most important technical point of this post. The customizations in this SharePoint environment added time at numerous points. In one example, some branded pages lost the breadcrumb trail, and a reversion to old content was required to keep that functionality; all of this was only discovered after engaging Microsoft. Not only that, but for long-term use, any customizations need to be documented so that future developers understand what is going on with the code and all the dependencies affecting each other. If you have interacted with developers much, you know that this is unlikely and will require a BA with a high attention to detail who interfaces with developers. Or, take the path of least resistance and simply do not customize SharePoint.

 

The above points do not cover all the lessons learned in the SharePoint migration. Hopefully, though, they provide enough key points that your next migration effort or SharePoint project goes a bit smoother and faster than the project described here.

Changing the default search delegate control in SharePoint 2007 MOSS

Modifying SharePoint, as always, is something that never seems to come easily. Today, while re-branding a 2007 page, I attempted to change the default search box that came on the page:

<SharePoint:DelegateControl runat="server" ControlId="SmallSearchInputBox"/>

However, delegate controls cannot be easily modified. I then found a great post on Chris O’Connor’s blog outlining how to instead register an assembly: http://sharepointroot.com/2011/05/25/replace-sharepoint-delegate-control-smallsearchinputbox/. The only problem is that when I attempt to add the assembly as outlined, the errors multiply.

This leads to the assembly failing to load (“Unable to load file or assembly, version 14…”).

<%@ Register Tagprefix="SharePointWebControls" Namespace="Microsoft.SharePoint.Portal.WebControls" Assembly="Microsoft.Office.Server.Search, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

The “fixes” on Stack Overflow are disparate and, I think, hard to find, since these errors can be triggered by anything from using a 64-bit machine to third-party software. My feeling is that the assembly is not loading or being referenced properly from the server (it may also be worth noting that Version=14.0.0.0 corresponds to SharePoint 2010 assemblies, while MOSS 2007 ships version 12.0.0.0). The problem is complicated when, as developers, we cannot troubleshoot or configure the Central Administration console in our environment.

All this leads back to a larger lesson within SharePoint managed environments (and probably other companies as well). Project managers should expect customizations to take time and always pad estimates. In addition, business analysts should have been developing requirements along the way, and all delegate controls requiring changes, as mentioned above, should have been put into a traceability matrix. The problem described here could ideally be stored in the requirements repository and documented so that future attempts would have a documented resolution. If that were the case, PMs would know this requirement will take extra time, or better yet, the repository could serve future PMs by offering a pre-defined solution.

Algorithmic Trading Strategy

Please take a moment to observe the graphic for a head start on the post.

(Figure: master system overview)

This post will focus on the development of what is becoming my first attempt at a truly algorithmic trading strategy. The system below is not one strategy but rather a “strategy of strategies,” or more accurately, an algorithm of algorithms. Most systems I have found, while reliable, lack the integration of volume, open interest, economic events, and the news. This system seeks to integrate those things and then adaptively use aggressive or passive strategies depending on market conditions. Much of what is described here has been developed for Forex, but it would easily apply to commodities and leveraged ETFs. This post will not go into the actual coding or software development of the algorithms, but covers the fundamental reasoning behind how the system was conceived. The next section covers some of the inputs into the functions of the overall system.

 

SECTION 1: Inputs

Input 1 News

The first step of the system is a news aggregation component. For selected currencies or symbols, various RSS feeds can be consumed and aggregated into XML. From the aggregated XML, the nodes can be indexed according to strings containing specific keywords. These keywords are then counted to create a total sentiment ratio for that time period for specific asset classes. By recording these values across time periods, we get a time series of news sentiment relating to an asset class. While technical price structure drives pricing, so does the news. We’ll revisit this piece later to see how the other algorithms use it when interacting with the market. This news system is closely related to economic events, and the news can also reveal whether a risk-on or risk-off sentiment is forming. A rough sketch of the keyword-counting idea follows.
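
As a minimal sketch of the keyword-counting idea, assuming made-up keyword lists and a simplified feed item shape (neither is the system's actual configuration):

// Sketch: count bullish vs. bearish keywords across feed items for one asset
// and express the result as a sentiment ratio for the period.
const bullishWords = ["rally", "beat", "surge", "growth"];    // assumed keywords
const bearishWords = ["miss", "selloff", "default", "slump"]; // assumed keywords

function countHits(text, words) {
  const lower = text.toLowerCase();
  return words.reduce((n, w) => n + (lower.split(w).length - 1), 0);
}

// feedItems: [{ title, description, published }] parsed from the aggregated XML
function sentimentRatio(feedItems) {
  let bulls = 0, bears = 0;
  for (const item of feedItems) {
    const text = item.title + " " + item.description;
    bulls += countHits(text, bullishWords);
    bears += countHits(text, bearishWords);
  }
  const total = bulls + bears;
  return total === 0 ? 0 : (bulls - bears) / total; // -1 (all bearish) to +1 (all bullish)
}

console.log(sentimentRatio([
  { title: "EUR rally continues", description: "Growth beat expectations", published: "2015-01-05" }
])); // 1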

 

Input 2 Economic Events

In the same way we collect the news items, we can collect data on upcoming economic events. The news items will have shown us a herding mentality as people build a consensus (which may be incorrect). With the news indexed into a sentiment indicator and the actual release in hand, the further apart the two are, the larger the volatility spike we can expect afterwards. If, on the other hand, the news indicators were correct, that would imply the asset had already correctly priced in the event; this can be observed in corporate earnings up to a week before the report. Because economic events may cause large movements in unfavorable directions, the master algorithm closes out any open position and takes the profit before such events, while setting up a straddle strategy to take advantage of the news release. A small sketch of this surprise measure follows.
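
Here is a rough sketch of the “surprise” idea, comparing the pre-event consensus to the actual release; the threshold, field names, and sample numbers are illustrative assumptions, not the system's rules:

// Sketch: the wider the gap between pre-event consensus and the actual release,
// the larger the expected post-event volatility.
function surprise(consensus, actual) {
  // Normalize by the size of the consensus so different indicators are comparable.
  return Math.abs(actual - consensus) / Math.max(Math.abs(consensus), 1e-9);
}

function preEventAction(openPosition, event, surpriseThreshold) {
  // Ahead of the event, flatten any open position and stage a straddle.
  const actions = [];
  if (openPosition) actions.push("close position and take profit");
  actions.push("place straddle around " + event.name);
  // After the release, only keep the straddle legs if the surprise is large enough.
  event.onRelease = (actual) =>
    surprise(event.consensus, actual) >= surpriseThreshold
      ? "hold straddle: expect volatility spike"
      : "unwind straddle: event already priced in";
  return actions;
}

const nfp = { name: "Nonfarm Payrolls", consensus: 200000 };
console.log(preEventAction(true, nfp, 0.1));
console.log(nfp.onRelease(150000)); // 25% miss => hold straddle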

 

Input 3 Interest Rates and Open Interest

The next pieces the system compiles are interest rates and open interest. Interest rates show which currencies pay the most, which is obviously important when doing passive investing for carry trades: we “carry” the currencies which pay us the most and reduce the size of the ones which pay less. These computations are taken and stored in a database for use in time series analysis, allowing the carry trade to be computed. Open interest is collected via a query, and this data is used as an input by our trend systems. A rough sketch of ranking pairs by interest rate differential follows.
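
A minimal sketch of ranking pairs by interest rate differential for the carry trade; the rates and pair names are illustrative placeholders, not live data:

// Sketch: rank currency pairs by the difference between the base and quote
// currency policy rates; carry trades favor the largest positive differentials.
const policyRates = { AUD: 0.045, NZD: 0.035, USD: 0.0025, JPY: 0.001, EUR: 0.0005 }; // assumed

function carryRank(pairs, rates) {
  return pairs
    .map(pair => {
      const [base, quote] = pair.split("/");
      return { pair: pair, differential: rates[base] - rates[quote] };
    })
    .sort((a, b) => b.differential - a.differential); // largest carry first
}

console.log(carryRank(["AUD/JPY", "NZD/USD", "EUR/USD"], policyRates));
// AUD/JPY (+4.4%), NZD/USD (+3.25%), EUR/USD (-0.2%)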

 

Input 4 Clustering

Clustering refers to the associations of items within the data set. Such clusters could include supply chain components, regional trading blocs, or senior executives and their firms. For the purposes of this Forex system, clustering measures the relationships between currencies so that we know how far “ripple” effects will travel when economic news and volatility hit a certain currency. A recent example is how the Singapore dollar weakened after the devaluation of the yen. The clustering then defines these networks when analyzing overall system impact from changes. A simple correlation-threshold sketch follows.
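
As a rough sketch of one way such clusters could be formed, here currencies are grouped when their pairwise correlation exceeds a threshold; the correlation matrix below is a made-up placeholder, not real market data:

// Sketch: group symbols whose pairwise correlation exceeds a threshold,
// so a shock to one member flags the rest of its cluster for "ripple" effects.
function clusterByCorrelation(symbols, corr, threshold) {
  const clusters = [];
  const assigned = new Set();
  for (const s of symbols) {
    if (assigned.has(s)) continue;
    // Start a new cluster and pull in everything correlated above the threshold.
    const cluster = [s];
    assigned.add(s);
    for (const t of symbols) {
      if (!assigned.has(t) && Math.abs(corr[s][t]) >= threshold) {
        cluster.push(t);
        assigned.add(t);
      }
    }
    clusters.push(cluster);
  }
  return clusters;
}

// Placeholder correlation matrix, not real market data.
const corr = {
  JPY: { JPY: 1.0, SGD: 0.8, EUR: 0.2 },
  SGD: { JPY: 0.8, SGD: 1.0, EUR: 0.3 },
  EUR: { JPY: 0.2, SGD: 0.3, EUR: 1.0 }
};
console.log(clusterByCorrelation(["JPY", "SGD", "EUR"], corr, 0.7));
// [ [ 'JPY', 'SGD' ], [ 'EUR' ] ]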

 

Input 5 VIX, SPY, DXY and Treasuries

The fifth input the algorithm uses is the options volatility or “fear” index (the VIX) to gauge volatility. This input helps the system spot impending changes alongside the news index. We can also use the implied volatility of futures contracts on the VIX to estimate the range of prices over the next few months. Because implied volatility is a spread around the current price level, the system can use the news sentiment index to point at the final direction for asset pricing. The SPY can also be measured to see how much fear exists, since a flight to stocks typically does not happen during market crashes. The final fear measures collected are the Dollar Index and open interest in short-term Treasuries, both of which move when a risk-off attitude takes hold. Keeping these four items within one dataset also allows continual monitoring of the correlation coefficients between these variables. A small correlation sketch follows.
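
A minimal sketch of the correlation monitoring between any two of these series; the sample daily changes are placeholders, not real market data:

// Sketch: Pearson correlation coefficient between two equally long series,
// e.g. daily VIX changes vs. daily SPY returns.
function pearson(x, y) {
  const n = x.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(x), my = mean(y);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - mx) * (y[i] - my);
    dx += (x[i] - mx) ** 2;
    dy += (y[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Placeholder daily changes, not real market data.
const vixChanges = [0.5, -0.2, 1.1, -0.4, 0.3];
const spyReturns = [-0.3, 0.1, -0.8, 0.2, -0.1];
console.log(pearson(vixChanges, spyReturns).toFixed(2)); // strongly negative, as expected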

 

SECTION 2: Computation

Because this post is only a general overview, devoid of specific rules or execution points, the technical details of the computation are not covered. In a general sense, the above inputs feed into functions, and these functions help determine which strategies to use or avoid. The computation can also filter out incorrect signals from the trading systems themselves when they call for a new position. Here are a few examples of why the computation is needed above and beyond a single-use system, even though such a system may produce excess returns over time.

 

For example, a winning trend system may win over time, but since markets do not trend roughly 70% of the time, that represents an enormous opportunity cost. During these periods our algorithm of algorithms “switches” to find the most opportunistic trading systems by evaluating the inputs and the technical indicators of prices. Using our inputs we can gauge when we should be investing passively, trend following, scalping, responding to news and economic events, or range trading. Because the system uses live feeds and can judge correlation coefficients, profitability can be tracked so that systems deactivate on outside triggers. We can even compute the covariance of returns between two trading strategies, which lets us relate each strategy to the overall system’s beta and then change the weighting of the individual strategies (the covariants). A weighting sketch follows.
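
As a rough sketch of weighting strategies by how they co-move with the overall system, here is one possible rule; the return series and the inverse-covariance weighting are illustrative assumptions, not the actual allocation logic:

// Sketch: compute the covariance between each strategy's returns and the
// system's returns, then down-weight the strategy that co-moves the most.
function covariance(x, y) {
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(x), my = mean(y);
  return x.reduce((s, xi, i) => s + (xi - mx) * (y[i] - my), 0) / (x.length - 1);
}

// Placeholder weekly returns for two strategies and the combined system.
const trendReturns  = [0.02, -0.01, 0.03, 0.00, 0.01];
const scalpReturns  = [0.00,  0.01, 0.00, 0.01, 0.00];
const systemReturns = [0.01,  0.00, 0.02, 0.00, 0.01];

// Weight each strategy inversely to how strongly it co-varies with the system.
const covs = [covariance(trendReturns, systemReturns), covariance(scalpReturns, systemReturns)];
const inv = covs.map(c => 1 / Math.max(Math.abs(c), 1e-8));
const weights = inv.map(v => v / inv.reduce((s, w) => s + w, 0));

console.log("covariances:", covs.map(c => c.toFixed(5)));
console.log("weights (trend, scalp):", weights.map(w => w.toFixed(2)));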

 

SECTION 3: Strategies

This section will cover the strategies used in a broad sense but won’t drill down into the exact entry and exit points. The strategies cover most of the conditions which occur at various times in the market. That is why the inputs and the computation are helpful in timing when the strategies should be deployed.

 

For our trend strategies, which perform best during periods of volatility, the two selected are a modified breakout system and the Ichimoku system. While each is a trend system, each may receive different signals at points in time. Position sizes are affected by volatility and open interest. Our inputs from Section 1 help us determine whether a trend is in fact legitimately occurring; similarly, they can inform us to enter a trend position earlier than the system signal.

 

The scalp strategy is a high-frequency, one-minute trade strategy which uses technical indicators. This strategy is the least affected by our inputs because the scalp size needs very little volatility. It is also less profitable than a trend strategy. The scalp system takes advantage of low-volatility calm periods in a currency pair and then places a buy when movement occurs. It uses mean reversion to take profits.

 

The range trade strategy works in ranging conditions where prices generally move sideways. Rather than buying when prices exceed the range of movement, the strategy places a sell order expecting a reversion to the mean, or average. Note that not only does the covariant beta determine whether this should be used and to what extent, but so does our clustering dataset: items ranging in one cluster may cause connected clusters to start ranging as well.

 

The carry trade is the most passive of all the strategies involved and carries the lowest transaction costs. This strategy buys and sells currencies based upon their rates of interest, which in turn should reinforce a trend toward certain currency pairs. However, risk-on and risk-off flows can negate this during flights to safety toward even negative-yielding instruments, as happened during the financial crash of 2008.

 

The news event trade actively trades the news by assuming positions before a news or economic event occurs. The news aggregator influences this, as mentioned earlier, by measuring the predicted reaction against the actual event and the associated volatility.

 

CONCLUSION

While this post does not contain the technical rules, it does show some of the building blocks of an adaptive algorithmic system. Future posts will lay down the groundwork for our logic, associations, data architecture, modeling, planning, and requirements. Each of our “blocks” is essentially an application and dataset in itself, on top of which we must create a level of interoperability to allow all the systems to speak and interact with each other.

 

Market Analysis of the Electronic Health Record Market Using Porter’s 5 Forces Model (Part 1)

Executive Summary

The Electronic Health Record (EHR) market is currently best characterized by two forces: technology and incentives. EHR vendors are software vendors, which means their products evolve rapidly and exist in a larger environment that is itself rapidly evolving. Investment and strategic decisions therefore need to weigh the strength or weakness of both the healthcare and technology sectors. According to Frost & Sullivan, the “core hospital EHR market is considered to be mature and dominated by a handful of well-established, relatively entrenched vendors,” yet it is also dynamic because “increasing provider consolidation, improper product price points, poor usability, and uncertainties regarding the financial and logistical fallouts of healthcare reforms present new opportunities (and risks) for both existing vendors as well as entrants with niche products or service” (Eder, n.d.).

In addition to being a mature market, the market is also living on a deadline. The Health Information Technology for Economic and Clinical Health (HITECH) Act, contained within the American Recovery and Reinvestment Act (ARRA), provides $20 billion in incentive payments for meeting “meaningful use” criteria (Lamont, 2010). However, these incentive payments cover a five-year period and will expire by 2015 (Singh & Sawhney, 2006). This means that providers must adopt EHRs with a sense of urgency. A mature market and a funding window of opportunity for government-subsidized growth are characteristic of the industry.

Industry Overview

Revenue Size

The incentives and growth provisions in the ARRA and HITECH acts are driving substantial growth in the EHR field. Estimates vary, but some analysts state that revenue will reach $3 billion in 2013 (Lewis, n.d.). This conflicts with other reports of $6.5 billion in 2012 (Eder, n.d.). Averaging the two, available research places the figure at roughly $4.75 billion. This represents a total for the entire market, which includes individual, ambulatory, emergency, ePrescribing, and long-term post-acute care segments; the largest segment is ambulatory (“CCHIT Certified products,” n.d.). The market is further segmented by the types of practices served: family, clinic, large-scale hospital, billing, and hospital systems (Swab & Ciotti, 2010). Integrating these are ancillary software applications for practice management, connectivity, and billing. A product’s niche would be determined by cross-referencing these two areas, as laid out in the product matrix below.

Product Matrix: care settings (Ambulatory, Emergency, ePrescription, Long term, Inpatient) cross-referenced against practice sizes (Individual (PHR), family, clinic, large scale).

Employment

The size of the total EHR revenue stream is directly dependent upon the number of physicians and hospitals that use the service. Most current estimates show that 225,000 physicians are using some form of EHR, with over 300 vendors, 97 of which are CCHIT certified (Thorman, n.d.). Other analysts see a “volatile and highly fragmented [market] … served by more than 300 vendors supplying a variety of basic to advanced EHRs to approximately 261,000 physicians, or 44% of physicians in an ambulatory practice for 2009” (Lewis, n.d.). Since four vendors account for over 75% of the market share, the total employment of these vendors, 10,580, can be measured directly, but it is very difficult to state with any accuracy the size of the rest of the industry. If one were to take the sample we have and apply it to the rest of the industry, the figure would be 258,125. The difficulty in obtaining a true employment figure lies in the fact that some EHR firms are subsidiaries of companies such as General Electric (GE), which makes GE Centricity; these companies are less correlated to moves in the EHR market and have a broader market exposure.

Competitors & Industry Leaders

The literature shows Epic (privately owned), AllScripts (MDRX), NextGen, owned by Quality Systems, Inc. (QSII), and Cerner (CERN) as the four largest vendors (Sittig et al., 2011). Other literature places eClinicalWorks (privately owned) in third place, showing how disparate market data is. Practice Fusion (privately owned) is also growing rapidly in size because it is free; its website reports 100,000 users. The financial and technical analysis of these companies will show that they, Cerner in particular, represent the largest players.

Industry Drivers

Porter’s Five Forces Framework

The conceptual framework for this research was developed by examining the industry under Porter’s five forces theory of competition. Taken into account for each firm were the underlying industry themes of government incentive payments ending in 2015 and market maturity. The five forces model is a desirable analytical framework because it can determine investment profitability by deconstructing industry structure (Porter, 1980).

Entry threats. This research finds the threat of entry to be low in this first structural determinant category. New entrants will be hard pressed to find a market niche, as vendors are seemingly entrenched. There is little room for product differentiation, buyer switching costs are high, and existing players have large economies of scale at work. Another vital entry barrier is meeting the meaningful use criteria.

Frost & Sullivan find that only 2% of EHR systems currently qualify for the meaningful use clause (Eder, n.d.). Under the law, meaningful use means that providers need to show they are using the technology in ways that can be both qualitatively and quantitatively measured. All of this may explain why 75% of the market is dominated by four players (Thorman, n.d.).

Because this industry’s capital requirements consist only of computing technology such as servers, with no other physical equipment, they are lower compared to distant industries such as oil & gas exploration, or even related industries such as healthcare delivery. If money flows out of the healthcare and technology sectors, these companies will experience a correlated difficulty in bond and stock issuance.

On the whole, the technology industry is quite volatile, and merger & acquisition (M&A) activity is high. Liquidity problems due to macroeconomic crises may lower the level of M&A activity. This means that for technology giants, the barrier to entry is low when you take into account the capital of Microsoft (MSFT) or Google (GOOG). Siemens (SI), McKesson (MCK), and GE are all capable of moving against existing market holders with their larger size. The key factors are switching costs, the remaining portion of the EHR implementation space, and the rate of growth the leaders carry. These larger entrants would need to act very nimbly or use M&A activity to buy market share.

Because of the market’s newness, there is not a heavily developed customer preference base, although Epic leads in overall adoption and there are reports that doctors by and large determine which system will be used (forbes tough millionaire). The industry is still lucrative for mergers because of the total $20 billion available for meaningful use incentives. However, Frost & Sullivan still expect revenue to saturate in 2016 at $1.4 billion (Lewis, n.d.).

Frost & Sullivan find that the market has entrenched players. While this is true, it cannot be ignored that one-man startups such as Practice Fusion have taken disproportionate market share within two years. Yet Practice Fusion seems to be the exception to the rule. With high switching costs, capital requirements, and merger & acquisition activity, this is a market with a low overall threat of entrants and high barriers to entry.

Substitution threats. The research logic indicates that the threat of substitution is low, which is desirable for existing EHR players and blunts the appeal for new entrants. EHR software is already a market segment of software, and the only conceivable substitutes would be existing software not engineered for that purpose. Buyer switching costs are high, there is a medium level of product differentiation, and there are no viable substitute products. This product is not easily substituted due to deployment effort and high switching costs; IT investments are costly ventures, and the risk in deployments is what acts as a switching-cost deterrent. This explains the “entrenchment” of the current market players. There is no quality depreciation in software, but our theme of rapid technology change means that applications with more intuitive features and enhancements will force productivity gains requiring updates. This low threat of substitute products results in low risk for existing players.

Buyer power. The research finds buyer power to be low. In the EHR industry, 4 firms hold 75% of a market of an estimated 650,000 physicians (many of whom do not meet meaningful use criteria). This constitutes an oligopoly. Buyers have medium concentration, and there is a variety of relatively new products to choose from with little reputation. Buyer volume is relatively low, and buyers’ switching costs relative to the supplier are quite high. Buyers also have very little information as well as non-existent substitute products. There are moderate differential advantages of these products compared to similar industry products. Price sensitivity of buyers is currently low because the incentive payments of ARRA act as subsidies. This puts the risk from buyer power as low.

Supplier power. The bargaining power of suppliers to EHR vendors is relatively high. Labor and hardware are the only inputs: IT staff and computing capital. Replacing skilled knowledge workers is expensive, as IT workforces are costly to assemble due to the varied skill sets required (Leifer & White, n.d.). The impact of hardware and labor has been empirically measured as very significant in determining product quality (Wixom & Watson, 2001). Reports indicate a scarcity of EHR-licensed and educated workers (Boyle, n.d.). This means that firms with excellent recruiting and lean operational processes are best poised for success, because supplier power is high.

Rivalry. The oligopolistic structure and the number of competing products have resulted in many products which have each found a small niche, but whose operating and business models are so lean that they do not require many providers (“After Tough Year, M&A Market Begins to Rebound,” n.d.). This would account for the three tiers of customer sizes and the high switching costs. All of this creates what is described as a very crowded market, prone to mergers & acquisitions shaping how it will look: “We are going to see a natural consolidation and vendor rationalization is happening across the board, as the Cerners and the Epics move in and take larger and larger market share [from the 300 total vendors]” (Lewis, n.d.).

The Five Forces Summary

Force | Rating | Notes
Threat of New Entrants | Low | Deeper reliance and integration will only cement current players’ market position.
Supplier Power | High | Contracting with hardware suppliers and recruiting are key (Epic’s 2% acceptance rate is evidence).
Substitute Products | Very Low | —
Buyer Power | Low | Buyers see similar products, with only the reputation of existing players influencing choice.
Rivalry | Very High | A crowded domestic market.

Follow the blog in a few weeks to see part 2 of this analysis: the financial performance of these firms to find the strongest players. Part three will then examine the relationships between the financials and the 5 forces framework to determine what the most successful players are doing in their internal business models for success.

References

After Tough Year, M&A Market Begins to Rebound. (n.d.). Health Data Management Magazine. Retrieved December 14, 2011, from http://www.healthdatamanagement.com/issues/18_6/after-tough-year-ma-market-begins-to-rebound-40376-1.html

Boyle, A. (n.d.). EHR implementation could be hurt by shortage of IT professionals. Modern Medicine. Retrieved December 14, 2011, from http://www.modernmedicine.com/modernmedicine/Modern+Medicine+Now/EHR-implementation-could-be-hurt-by-shortage-of-IT/ArticleStandard/Article/detail/744923

CCHIT Certified products. (n.d.). CCHIT. Retrieved December 6, 2011, from http://www.cchit.org/products/cchit

Eder, S. (n.d.). Frost & Sullivan Report Finds that U.S. Hospitals Significantly Ramp up Use of Electronic Health Records. Retrieved December 9, 2011, from http://emrdailynews.com/2011/10/20/frost-sullivan-report-finds-that-u-s-hospitals-significantly-ramp-up-use-of-electronic-health-records/

Lamont, J. (2010). Data drives decision-making in healthcare. KM World, 19(3), 12-14. doi:Article

Leifer, R., White, K. (n.d.). Information systems development success: perspectives from project team participants. MIS Quarterly, 10(3), 215-223.

Lewis, N. (n.d.). EHR Revenue To Hit $3 Billion In 2013. Informationweek. Retrieved December 9, 2011, from http://www.informationweek.com/news/healthcare/EMR/227200057

Overview of International EMR/EHR Markets. (n.d.). Retrieved from http://www.accenture.com/SiteCollectionDocuments/PDF/Accenture_EMR_Markets_Whitepaper_vfinal.pdf

Porter, M. E. (1980). Industry Structure and Competitive Strategy: Keys to Profitability. Financial Analysts Journal, 36(4), 30-41. doi:Article

Singh, S., & Sawhney, T. (2006). Predictive analytics and the new world of retail healthcare. Health Management Technology, 27(1), 46-50. doi:Article

Sittig, D. F., Wright, A., Meltzer, S., Simonaitis, L., Evans, R. S., Nichol, W. P., Ash, J. S., et al. (2011). Comparison of clinical knowledge management capabilities of commercially-available and leading internally-developed electronic health records. BMC Medical Informatics & Decision Making, 11(1), 13-21. doi:Article

Swab, J., & Ciotti, V. (2010). what to consider when purchasing an EHR system. hfm (Healthcare Financial Management), 64(5), 38-41. doi:Article

Thorman, C. (n.d.). EHR Software Market Share Analysis – Software Advice Articles. Retrieved December 3, 2011, from http://blog.softwareadvice.com/articles/medical/ehr-software-market-share-analysis-1051410/

Wixom, B., & Watson, H. (2001). An empirical investigation of the factors affecting data warehousing success. MIS Quarterly, 25(1), 17-41.

Simple trend following strategy to protect your 401k

This is a very easy and actionable way to protect your 401k. With record amounts of money being put into the system, I would have to agree with hedge fund manager Michael Dever that there is no such thing as a free lunch. I fear that people who don’t monitor their accounts with diligence could easily wind up working longer toward retirement if they fail to take time and action. What follows is a strategy to preserve your wealth in up or down markets.

Strategy

Rules:

Enter on a 20-day high with either on-balance volume or the accumulation/distribution line confirming. These can be found in most charting packages on your retirement websites.

Exit when a trailing stop is hit. This is a stop/exit point charted below your entry point. The trailing stop is based on volatility; in the bottom chart it uses the Average True Range (ATR) indicator. The other exit is the last known resistance point or low. This strategy exits on whichever is lower. On an exit from one fund, we immediately enter the other. Whenever a position is entered, a stop must be set at 2 ATR units (2 x the ATR value). A sketch of the ATR-based trailing stop follows.
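
Here is a minimal sketch of the 2-ATR trailing stop idea; the bar data, the 14-period length, and the simple-average ATR are illustrative assumptions rather than the exact indicator settings used in the charts:

// Sketch: Average True Range (simple-average version) and a 2 x ATR trailing
// stop that only ratchets upward beneath a long position.
function averageTrueRange(bars, period) {
  const trs = bars.map((b, i) => {
    if (i === 0) return b.high - b.low;
    const prevClose = bars[i - 1].close;
    return Math.max(b.high - b.low, Math.abs(b.high - prevClose), Math.abs(b.low - prevClose));
  });
  // Simple average of the last `period` true ranges (a simplification of Wilder's smoothing).
  const recent = trs.slice(-period);
  return recent.reduce((s, v) => s + v, 0) / recent.length;
}

function trailingStop(bars, period, multiple) {
  let stop = -Infinity;
  for (let i = period; i < bars.length; i++) {
    const atr = averageTrueRange(bars.slice(0, i + 1), period);
    // Raise the stop as price climbs; never lower it.
    stop = Math.max(stop, bars[i].close - multiple * atr);
  }
  return stop;
}

// Placeholder daily bars, not real SPY data.
const bars = [];
for (let i = 0; i < 30; i++) {
  const base = 100 + i * 0.5;
  bars.push({ high: base + 1, low: base - 1, close: base });
}
console.log(trailingStop(bars, 14, 2).toFixed(2)); // stop sits 2 ATR below the latest close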

A standard buy-and-hold strategy with the S&P would have returned about 9% YTD right now; following this strategy would have yielded about 25%. What we do is use an inverse ETF that captures the exact inverse of the S&P on down legs. In effect, you are always moving sideways or up. For the long side we use the SPY ETF, which tracks the S&P 500; the inverse side uses SH, introduced in the breakdown below.

Here is the breakdown:

1. We enter SPY at “1 in” above, as it is a 20-day high. 1A confirms volume; later, 1B does as well.

2. SPY moves up and then we get stopped out at “2”. When we get stopped out of SPY, we go into SH (the short inverse ETF) at “2 in”.

3. At “3” we get stopped out of SH and move back into SPY.

4. After catching the monster wave of 2012, we would get stopped out at “4” and move into SH at “4 in”.

5. We would still be riding this trend on the way down right now, for a healthy yearly gain.

There are, of course, ways to optimize this: you could be less aggressive and, instead of flipping 100% of your position, do 80/20 or 70/30. Or, if you were really adept, you would have identified the strong growth stocks and ridden them on the up move, then on the down move, shorted the weaklings.

Disclaimer:

  1. I am a completely independent analyst and am not paid by any company whose assets, securities, investments, ETFs, mutual funds, commodities, or currencies I cover or write articles about.
  2. However, I may hold long or short positions in any asset class in item 1, which I may write about at any time.
  3. This research is NOT a guarantee. This article was written to provide investor information and education, and should not be construed as a guarantee or investment advice.
  4. This advice or strategy may not be applicable to you. Every individual has a varying time frame, risk threshold, tax strategy and price target. It is the discretion of the reader to determine if this strategy meets your investment goals and I assume no liability for you using your own discretion.
  5. This research might not pertain to you. I have no idea what your individual risk, time-horizon, and tax circumstances are: please seek the personal advice of a financial planner. I assume no liability for losses, taxes, gains or any other monetary change resulting from using this advice when investing.
  6. Information and data provided may contain errors. My articles use company releases, government filings, third-party data, and academic research. These may contain approximations and errors. Please check estimates and data for yourself before investing.
  7. My ratings and/or analyses of an asset as defined by item 1 only represent my personal view on the asset and/or my assessment of the probable movement of the asset price in the future. They are by no means a guarantee of performance on any long or short trades on an asset as defined in item 1, should not be relied upon solely for buying or selling an asset, and past performance is not a guarantee of future performance.
  8.  All content is subject to change without notice. Information is obtained from sources believed to be reliable, but its accuracy and completeness are not guaranteed.
  9. Potential investors should read the entire investment prospectus and, in particular, in considering the prospects for the Company, should consider all the information contained therein and the risk factors that could affect the performance of the Company. Investors should seek professional advice from a licensed investment adviser prior to taking any action.
  10. Individuals making any legal claims on the basis of these findings will be subject to court and attorney fees incurred by those claims.

Creating a Culture of Compliance in Healthcare IT Teams

Introduction

             This research examines how Healthcare IT (HIT) organizations can achieve regulatory compliance in their processes and operations, specifically operations related to Electronic Medical Record (EMR) applications. EMR compliance largely depends on the Health Insurance Portability and Accountability Act (HIPAA) because EMRs use data. HIPAA standards are noted as being “considerably more complex and controversial than those for data standardization” (Field, 2007, p. 199).  The HITECH Act places paramount importance on HIPAA and increases penalties and oversight of personal health information (PHI) (“HITECH Act Enforcement Interim Final Rule,” n.d.).

EMR use is at the same time vital for competitiveness, resulting in a unique situation: “Hospitals face a catch-22 situation in responding to the conflicting mandates of developing electronic health records that allow information sharing across institutions versus ensuring absolute protection and security of patients’ individual health information” (Sarrico & Hauenstein, 2011, p. 86). Risk-reward strategies are further complicated when HITECH funding subsidizes EMR use (“HHS extends MU Stage 2 deadline to spur faster EMR adoption,” n.d.). HIT organizations must therefore use EMRs to stay competitive while minimizing risk from HIPAA regulation.

Some methods of minimizing regulatory sanction risk include data collection & retention, sharing & transmission, and reporting. Research shows that the software development quality behind data collection & retention may be improved using Six Sigma processes (Grant & Mergen, 2009). In addition to implementing Six Sigma, HR policies and incentives can align people with these processes to ensure compliance. The next sections highlight how each risk management goal can be achieved.

Goals

Technology represents new opportunity, but as in most circumstances, new opportunity carries new risks. These goals limit HIPAA compliance risk by controlling PHI flows. The usual consequence of controlling information flows is hampered productivity; the strategies here seek to control PHI and yet sustain collaboration in a secure fashion. These goals lie largely in the realm of software development but also fall under IT policy.

Data Collection & Retention

Any breach of PHI under HIPAA is subject to a fine, and some violations are severe enough to carry jail time (“HITECH Act Enforcement Interim Final Rule,” n.d.). A starting point for avoiding breaches of PHI is the data collection and retention policies. First, ensure that variances in the data collection processes are removed. Removing variances leads to higher data quality in systems, and by improving quality, EMR data warehouses will have lower costs due to interoperable datasets (Chordas, 2001). To further reduce collection risk, data can be collected automatically through interoperable devices rather than by a human (Conley et al., 2008).

Creating training programs and policies which educate employees to always mark PHI, and thereby designate it for information architects, will further enhance PHI security. This policy could be enhanced and driven by setting performance goals (Mello, 2011). In addition to training on PHI designation, financial incentives can be applied to drive that behavior. Such incentives can even be financed through the EMR subsidies provided by the HITECH law or by payer reimbursement strategies (Lewis, n.d.). Software developers and CIOs can supplement the training and incentives by ensuring that software contains specific public PHI classes in object-oriented programs.

The newly marked PHI can be set aside under retention policies so that architects can set granular permissions, either with the public PHI object classes or through global security groups. The granularity should occur at the field level within a record rather than at the record level, which is the current standard (and responsible for many breach possibilities). This model allows the same record to be viewed by many parties, but only certain parties would see the PHI. Then, during data retention, PHI can easily be manipulated, deleted, and archived by user groups with appropriate permission levels. These retention policies would be created and implemented by IT but enforced in coordination with HR and senior leadership. A small sketch of field-level masking follows.
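
As a rough sketch of the field-level idea, here is one way a record could be masked per caller; the field names, roles, and masking rule are my own illustrative assumptions, not a prescribed implementation:

// Sketch: return the same record to every caller, but blank out fields
// marked as PHI unless the caller's role is granted field-level access.
const phiFields = ["ssn", "dateOfBirth", "address"]; // assumed PHI designations

function redactRecord(record, role) {
  const canSeePhi = role === "clinician" || role === "privacyOfficial"; // assumed roles
  const copy = {};
  for (const field of Object.keys(record)) {
    copy[field] = (!canSeePhi && phiFields.includes(field)) ? "[REDACTED]" : record[field];
  }
  return copy;
}

const record = { patientId: 42, ssn: "123-45-6789", dateOfBirth: "1970-01-01", diagnosisCode: "E11.9" };
console.log(redactRecord(record, "billingClerk"));
// { patientId: 42, ssn: '[REDACTED]', dateOfBirth: '[REDACTED]', diagnosisCode: 'E11.9' }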

Sharing & Transmission

HITECH funding and the Patient Protection and Affordable Care Act (PPACA) provide for greater use of EMR applications and claims-data sharing at an interstate level (“Summary of New Health Reform Law,” 2011). Because PHI data-breach sanctions are higher, this creates a problem for the required data transmission and sharing. In some circumstances, physicians simply did not access the EMR so they had no culpability in the legal system (Sarrico & Hauenstein, 2011).

The previous section covered a set of policies, incentives, and software development practices to reduce internal data-breach risk. The root of the problem in sharing lies in an economic concept known as asymmetric information: a situation where one party knows more or less than the transacting party and is therefore unable to appropriately manage risk (Levitt & Dubner, 2009). To manage this risk, those liable for sanctions must increase their knowledge of PHI data containment procedures in the organizations and companies they share with.

This could be implemented by ensuring that, prior to data transmission, the recipient's data architecture is validated against the standards established in the sender's organization. Any sharing should be approved and managed by a designated privacy official, likely as an arm of the CIO (Miller & Cross, 2011). If validation by these privacy officials and policies fails, then the possibility of a PHI breach should be reported to regulators, legal departments should be alerted, and data transmissions stopped. Metrics would measure this compliance and reporting each time a sharing process is initiated, and pay/incentives would be adjusted accordingly to drive behavior (Mello, 2011). Such validation could also be built into the software development lifecycle as an output to be measured for quality, resulting in far greater quality in the production process rather than in the testing process (Grant & Mergen, 2009). In short, we have risk reduction through designated privacy officials, reporting-driven pay for performance, and software-implemented safeguards.

Reporting

Another important aspect of preventing data breaches is knowing that breaches could occur. One way to achieve this is through reporting mechanisms. First, a “just culture” must exist in which the disciplinary system does not punish associated parties when self-reporting is involved (Marx, 2001). There is a very distinct problem that if the HIT team belongs to a matrix organization and a just culture does not exist, problems will be resolved by inaction due to a lack of accountability (Davis & Lawrence, 1978).

Therefore, all types of organizations would benefit from IT creating a hotline and web portal where violations could be reported anonymously and without fear of retaliation. This would allow the organization to learn how deviations from policy or practice had become the norm (Marx, 2001). As a manager once stated, “we don't know what we don't know.” This type of reporting allows for a sharing of those unknowns which would otherwise be impossible.

One way to operationalize this reporting is to make it a policy violation in itself not to report. In effect, failing to report a data-breach possibility or infraction becomes a larger infraction than the actual data breach. Given the steep sanctions that employers pay, such a proactive approach will be far less costly than an actual infraction. Maintaining anonymity would ensure that reporting occurs more often, especially coupled with the fact that not reporting, or knowing of someone who did not report, could result in termination.

Conclusion

Implementing policies, incentives, and software development practices in data collection, retention, transmission, and reporting can all help ensure HIPAA compliance. Most of these goals drive toward broad access through “de-identification,” thereby eliminating the traceability of PHI to an individual (Field, 2007). The PPACA calls for increased data sharing, but leveraging complex analytics for population-based studies should not require identified information to create the required predictive models (Singh & Sawhney, 2006).

De-identification occurs during the data collection and retention processes, in part by simply removing access at a granular level while still maintaining the overall integrity of the data. Data collection can be automated where possible to reduce variances. Ancillary to these policies, data collection can ensure that PHI is designated so that software objects can appropriately interact with it.

Transmission of EMR data carries obvious risks, since the data is now seen by more potentially unauthorized parties. Since the holder of the data is the one liable, HIT organizations should ensure that recipients have the same data architecture and data access controls in place as the sending organization. To help with this, KPIs should measure data transmission compliance.

Internal reporting mechanisms for data breaches and potential breaches need to be created as a proactive stop-loss measure. Essential to this is the creation of a just culture where reporting infractions and problems is of utmost importance. This reporting must be anonymous when necessary and free from retaliation.

These three measures serve to lower HIPAA risk and thereby maximize HITECH subsidies. Using simple risk-reward analysis, HITECH subsidies can be used to fund appropriate HIPAA security policies rather than paying a disproportionate amount in sanctions. The lower risk results in a higher reward than a situation where subsidies are simply spent on EMR purchasing and system implementation with a heightened possibility of sanctions. Best of all, many of the described software development processes add value early in the lifecycle and result in lower costs at application deployment.

References

Chordas, L. (2001). Building a better warehouse. Best’s Review, 101(11), 117. doi:Article

Conley, E., Owens, D., Luzio, S., Subramanian, M., Ali, A., Hardisty, A., & Rana, O. (2008). Simultaneous trend analysis for evaluating outcomes in patient-centred health monitoring services. Health Care Management Science, 11(2), 152–66.

Davis, S. M., & Lawrence, P. R. (1978). Problems of matrix organizations. Harvard Business Review, 56(3), 131–142. doi:Article

Field, R. I. (2007). Health care regulation in America: complexity, confrontation, and compromise. Oxford; New York: Oxford University Press.

Grant, D., & Mergen, A. E. (2009). Towards the use of Six Sigma in software development. Total Quality Management & Business Excellence, 20(7), 705–712. doi:Article

HHS extends MU Stage 2 deadline to spur faster EMR adoption. (n.d.). Healthcare IT News. Retrieved December 5, 2011, from http://www.healthcareitnews.com/news/hhs-extends-mu-stage-2-deadline-spur-faster-emr-adoption

HITECH Act Enforcement Interim Final Rule. (n.d.). U.S. Department of Health & Human Services. Retrieved March 3, 2012, from http://www.hhs.gov/ocr/privacy/hipaa/administrative/enforcementrule/hitechenforcementifr.html

Levitt, S. D., & Dubner, S. J. (2009). Superfreakonomics: global cooling, patriotic prostitutes, and why suicide bombers should buy life insurance. New York: William Morrow.

Marx, D. (2001). Patient Safety and the “Just Culture”: A Primer for Health Care Executives. Medical Event Reporting System for Transfusion Medicine. Retrieved from http://www.unmc.edu/rural/patient-safety/tools/Marx%20Patient%20Safety%20and%20Just%20Culture.pdf

Mello, J. A. (2011). Strategic human resource management. Mason, Ohio: Thomson/South-Western.

Miller, R., & Cross, F. (2011). The legal environment of business. Mason, Ohio; Andover: South-Western; Cengage Learning.

Sarrico, C., & Hauenstein, J. (2011). Can EHRs and HIEs get along with HIPAA security requirements? hfm (Healthcare Financial Management), 65(2), 86–90.

Singh, S., & Sawhney, T. (2006). Predictive analytics and the new world of retail healthcare. Health Management Technology, 27(1), 46–50.

Summary of New Health Reform Law. (2011, April 15). Kaiser Foundation. Retrieved from http://www.kff.org/healthreform/upload/8061.pdf

 

Improving Quality in a Matrix Organization

Introduction

            The concept of a just culture is defined by Thaden as “an atmosphere of trust, encouraging and rewarding people for providing essential safety-related information. A just culture is also explicit about what constitutes acceptable and unacceptable behavior” (Thaden, Hoppes, Li, Johnson, & Schriver, 2006, p. 964). Other researchers find that self-reporting and peer reporting of errors are a key part of policies to improve quality and safety (Marx, 2001). This paper examines how such a culture can be created and maintained, and the benefits it offers, in the framework of a large matrix organization.

Organizational Aspects to Improve Quality and Safety

            A key part of creating a culture that values error reduction, and to a larger extent overall quality, is building an organization that is made aware of mistakes, learns from them, and improves upon them. Matrix organizations, such as large corporations, inherently struggle with this because accountability is diluted by a thick layer of middle managers with competing or non-aligned priorities. The resulting lack of accountability and error reporting means that managers frequently choose inaction. If the status quo is maintained, they are not chastised; if they were to take charge, a mistake could happen and their career would be in jeopardy.

For improvement to occur in a matrix organization, error reporting must be mandated by policy and incentivized. To go further, failing to report known errors should be subject to discipline and constitute a policy violation in itself (Marx, 2001). This begins to create a culture where communication and gathering knowledge about systems are paramount. These policies must be “logic-based” and, at their core, could consist of a simple checklist to facilitate quality (Gawande, 2010).

Having logic-based policies means that the enforcement and effects of the policy are determined through a logical sequence of decisions in context. Crafting a policy this way creates a system that is both modular and flexible: modular in that smaller components can be added or removed without negating the validity of the remaining framework, and flexible in that the decision logic allows the policy to accommodate a variety of frequently and previously encountered situations. A minimal sketch of such a rule-based policy is shown below.
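
As a rough sketch of what a logic-based, modular policy might look like in code, the example below expresses the reporting policy as an ordered list of independent rules that can be added or removed without touching the rest. The rule names, thresholds, and outcomes are illustrative assumptions, not prescribed policy.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical incident record; fields are assumptions for illustration.
@dataclass
class Incident:
    error_reported: bool
    reported_within_hours: float

# Each rule is a self-contained, removable component of the policy.
@dataclass
class Rule:
    name: str
    applies: Callable[[Incident], bool]
    outcome: str

RULES: List[Rule] = [
    Rule("self-reported promptly",
         lambda i: i.error_reported and i.reported_within_hours <= 24,
         "coach and share lessons learned"),
    Rule("reported late",
         lambda i: i.error_reported and i.reported_within_hours > 24,
         "coach and review reporting barriers"),
    Rule("known error not reported",
         lambda i: not i.error_reported,
         "disciplinary review per policy"),
]

def evaluate(incident: Incident) -> str:
    """Walk the rule list in order and return the first matching outcome."""
    for rule in RULES:
        if rule.applies(incident):
            return f"{rule.name}: {rule.outcome}"
    return "no rule matched: escalate for manual review"

print(evaluate(Incident(error_reported=True, reported_within_hours=4)))
```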

Creating and Maintaining the Culture

            To enact these flexible policies, the first step is to get the relevant stakeholders to agree that coordination and information sharing are needed to improve value (Lee, 2010). Once buy-in is garnered, the next step is to ensure stakeholders communicate and that managers follow through on these policies via a shared, clearly communicated mission statement. Ways to further ensure that a quality-based culture exists include random internal audits, reimbursements for quality, coaching after errors occur, anonymous reporting to an unconnected matrix dimension, and fairness to avoid the perception of seniority bias in error reporting (Thaden et al., 2006).

Executing on Culture

            There are a few ways the above measures could be executed in a large matrix organization. Davis and Lawrence identify problems inherent in the matrix organization: anarchy, power struggles, groupitis, overstaffing, internal focus, and decision strangulation (Davis & Lawrence, 1978). In light of these problems, the prescribed self-reporting mechanisms should be designed to mitigate them or alert executive management to their presence. Paradoxically, if these problems exist, the error reporting needed to create a just culture may not be possible unless the roadblocks are removed.

Such reporting should capture not only individual errors but also observed matrix-dimension weaknesses: excessive meetings with no decisions, internal initiatives that duplicate efforts elsewhere, or elements of the matrix exerting authority without clear organizational links to each other. Reporting of these and other measures should be voluntary, as the symptoms Davis and Lawrence describe indicate organizational problems that directly affect quality and thereby impact shareholder equity and financing costs.

Benefits

            The most obvious benefit of error reporting is that it begins to hold people and organizations accountable. Mistakes in the production process can be documented, shared, and learned from across company lines. Training and development can create modules appropriate to job families depending on the task errors occurring (Mello, 2011). Reporting quality not only to the organization with the defects but also to other impacted organizations ensures that matrixed elements remain accountable for quality rather than choosing inaction or covering up known issues. The focus should not rest solely on the negative; effective, quality-improving processes and steps should also be reported, shared, and acted upon organization-wide.

Roadblocks

            A matrix organization may have power struggles, with actions decided by the leverage of political capital rather than organizational strength. One possibility is for a Quality Board to act as a functional matrix dimension itself. Error-reporting intake, knowledge sharing, and instituting organizational process changes rather than managerial personnel changes are key to this success.

A just culture encourages good behavior and makes explicit what is acceptable. For this to truly come to fruition, the reporting mechanisms need to be anonymous when necessary, with rewards for quality reports and no fear of reprisal. Departments would maintain their appropriate focus, and the Quality Board would ensure that impacted organizations were made aware of their errors. This shared transparency would increase accountability and make inaction less attractive to managers.

Conclusion

            Implementing a just culture depends on having proper reporting mechanisms in place and laying the groundwork in which they can exist. Matrix organizations present their own inherent management problems. As such, the tactics for creating a just culture depend on crafting a strategy that works in the matrix environment and is optimized to deal with matrix-specific problems. The first step in addressing quality problems is an analysis of the environment, informed in part by error reporting. The second step is proper diagnosis through reports alongside existing knowledge. The final step is using the diagnosis to alleviate reported and confirmed problems through training and development, monetary schemes, policy, or shifting the matrix dimensions themselves.

 

References

Gawande, A. (2010). The checklist manifesto: how to get things right (1st Picador ed.). New York: Picador.

Lee, T. (2010). Turning Doctors into Leaders. Harvard Business Review, 88(4), 50–58.

Marx, D. (2001). Patient Safety and the “Just Culture”: A Primer for Health Care Executives. Medical Event Reporting System for Transfusion Medicine. Retrieved from http://www.unmc.edu/rural/patient-safety/tools/Marx%20Patient%20Safety%20and%20Just%20Culture.pdf

Mello, J. A. (2011). Strategic human resource management. Mason, Ohio: Thomson/South-Western.

Thaden, T., Hoppes, M., Li, Y., Johnson, N., & Schriver, A. (2006). The perception of just culture across disciplines in healthcare. Proceedings of the Human Factors and Ergonomics Society 50th Annual Meeting. Retrieved from http://www.humanfactors.illinois.edu/Reports&PapersPDFs/humfac06/The%20Perception%20of%20Just%20Culture%20Across.pdf


EHR Interoperability and Big Data Opportunities

Healthcare delivery utilizes IT to varying degrees, with a lack of consistent standards, measures, and quality; this is analogous to the disparity seen across regional healthcare systems and individual clinics that stems from the fragmented evolution of the US healthcare system. The use of electronic health record (EHR) systems is increasing and may help with quality, but various roadblocks remain to the full, consistent, and coordinated implementation of such systems.

Current Healthcare IT & EHR Problems

            Providers do not get optimal use out of EHR systems because of the amount of work involved and the lack of quality measurement built into most of them. Further exacerbating the problem, most EHR systems do not have fully interoperable data sets, and interoperability is key when working with data sets (Pro, 2009).

Manual chart abstractions.

            Many EHR systems lack full capability for valuable quality reporting, and many providers must manually pull data and chart it themselves (“Quality Measurement Enabled by Health IT: Overview, Possibilities, and Challenges,” 2012). This presents a cost in two ways: labor overhead is higher and margins are reduced; further, the extraction of data to local hosting means that such reporting data is not shared with the larger community.

Lack of quality measure standardization.

            Payers, regulators, and providers hold differing views as to the variables and data sets that constitute quality. Because the measures vary and are not agreed upon, each quality report produced by an EHR system results in what is known in analysis as “curve fitting”: data is fitted to conform to a specific, discretionary definition rather than to a wider ecosystem that would represent a truer diagnosis. Quality measurement therefore varies and its representation is inconsistent.

Lack of data harmonization.

            Consistent with the theme of fragmentation and inconsistency, EHR systems are frequently unable to “speak” to each other. Some organizations share information, while others do not; even when they choose to, EHR vendors’ data sets are often incompatible, and the applications do not necessarily interface with each other. This results in an incomplete picture of population-based data. The more interoperable the data is, the more robust it can be to changes in variables without creating large standard deviations in projections (Conley et al., 2008).

Improvement Methods

            The good news is that much of this is being improved upon. Each of the three problems above can be addressed through payer, provider, and consumer coordination. The easiest path for a large payer would be to integrate EHR product data into a shared web service for consumption.

Shared web services. EHR products do not share their data with each other but hold it within the application. Web services allow queries to be run to fetch data, and SOAP web services offer encryption and authentication. A participating provider could grant access to its EHR web services, and a weekly XML or SQL build derived from individual EHRs would aggregate many providers’ EHR data into one. The data would be categorized into standard, constrained dimensions during the SQL insert/update step. Once the dimensions are constrained, variants of other models can be created by introducing a new variable from the data set (Obenshain, 2004). The breadth of this new payer dataset would be far greater than what a small clinic or even a regional system could ever hope to assemble. Below is how that could be done with a motivated payer (a minimal code sketch follows the figure):

Figure 1. Step 1: Data comes from the consumer’s personal EHR, all participating providers, and the payer’s claims data. There are slight variations in the data from each source, allowing for both financial and diagnostic quality measures, but the data is often incomplete or bad. Step 2: The data is used to update the big data warehouses. SOAP web services return the data as XML through queries; the data is cleaned and constrained depending on the EHR product. Inaccurate data is more easily revealed by using all data sources simultaneously rather than individually. The clean data is loaded into a Teradata SQL warehouse. Step 3: The big data warehouse exposes APIs depending on the type of user or agency accessing the application, and each API contains its respective public and private objects, classes, and services.
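
To make the aggregation step concrete, the Python sketch below shows one way a weekly build could call a provider’s SOAP endpoint, map vendor-specific measure codes onto a constrained shared dimension, and load the results into a local staging table. The endpoint URL, SOAP envelope, element names, and code mappings are all hypothetical; a real integration would follow the vendor’s WSDL and target the payer’s actual warehouse (Teradata in the figure, SQLite here for brevity).

```python
import sqlite3
import xml.etree.ElementTree as ET

import requests  # third-party; pip install requests

# Hypothetical SOAP endpoint and request envelope for a participating provider.
ENDPOINT = "https://provider.example.com/ehr/soap"
ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetQualityMeasures week="2012-W31"/></soap:Body>
</soap:Envelope>"""

# Constrained dimension: map each vendor's local measure codes to a shared code.
MEASURE_DIMENSION = {"A1C_POOR": "dm_a1c_gt9", "HBA1C>9": "dm_a1c_gt9", "BP_CTRL": "htn_bp_ctrl"}

def fetch_measures():
    """Call the (hypothetical) provider web service and yield parsed rows."""
    resp = requests.post(ENDPOINT, data=ENVELOPE,
                         headers={"Content-Type": "text/xml"}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for m in root.iter("Measure"):  # assumed element name in the response body
        yield m.get("patientKey"), MEASURE_DIMENSION.get(m.get("code")), float(m.get("value"))

def load(rows, db_path="payer_warehouse.db"):
    """Insert the weekly build into a local staging table, dropping unmapped codes."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS ehr_measures
                   (patient_key TEXT, measure_code TEXT, value REAL)""")
    con.executemany("INSERT INTO ehr_measures VALUES (?, ?, ?)",
                    [r for r in rows if r[1] is not None])
    con.commit()
    con.close()

# Example (requires a reachable endpoint): load(fetch_measures())
```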

 

Claims data enhancements. Claims themselves can provide powerful data, such as identifying complications, episode length, and medical outcomes (Bertsimas et al., 2008). Other claims data to be leveraged include age, gender, race, ICD-10/ICD-9 codes, claim volume per time period, procedures, and price. By combining this data with aggregated EHR data, a company such as Optum could provide some of the most robust, powerful analytics in the market, because it would have multiple ways to predict and observe trend divergences in patients, regions, and hospital systems. See Appendix One for a demonstration.
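
As an illustration of how claims variables might be combined with the aggregated EHR measures, the sketch below joins a hypothetical claims table against the staging table from the previous example and summarizes claim volume, cost, and a quality measure by diagnosis code. The schema is assumed for demonstration only.

```python
import sqlite3

# A minimal sketch assuming two tables exist in the staging database:
#   claims(patient_key, icd10, claim_date, amount)   -- hypothetical claims extract
#   ehr_measures(patient_key, measure_code, value)   -- from the earlier aggregation sketch
QUALITY_BY_DIAGNOSIS = """
SELECT c.icd10,
       e.measure_code,
       COUNT(*)      AS claim_count,
       AVG(c.amount) AS avg_claim_cost,
       AVG(e.value)  AS avg_measure_value
FROM claims c
JOIN ehr_measures e ON e.patient_key = c.patient_key
GROUP BY c.icd10, e.measure_code
ORDER BY claim_count DESC;
"""

def quality_by_diagnosis(db_path="payer_warehouse.db"):
    """Return claim volume, average cost, and the average EHR quality-measure
    value for each diagnosis code / measure pair in the combined dataset."""
    with sqlite3.connect(db_path) as con:
        return con.execute(QUALITY_BY_DIAGNOSIS).fetchall()
```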

Pattern recognition. In trend analysis, a divergence occurs where two trend lines travel in differing directions. EHR and claims data are not frequently used in conjunction because of their data differences, but even a few common variable anchors can begin to tie certain gender, ethnicity, and age ranges to EHR quality measurements. Combining this data could reveal claim-to-EHR divergences for an individual depending on region, provider, or hospital system (Devoe, McIntire, Puro, Chauvie, & Gallia, n.d.). See Appendix One for a demonstration; Appendix Two shows some examples of what a divergence could look like. Below is what could be developed for practical use if the data were available:

Figure 2. Users can spot divergences in many measures against the aggregated data available to their user group. In this example, the clinic would be alerted to a divergence in a biometric for a certain treatment group. The bottom chart shows how users could plot medical procedures over time by comparing ICD-10 codes, EHR data, and the consumer’s history, shown here graphically with simple event dots. In this example, the clinic’s failure to implement a procedure that was standardized elsewhere resulted in a deviation in this biometric.
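
The divergence check itself can be very simple. The sketch below fits a least-squares trend line to a clinic series and to the aggregate series, then flags a divergence when the slopes move in opposite directions or drift apart beyond a threshold; the biometric values and threshold are made up for illustration.

```python
from statistics import mean

def slope(series):
    """Least-squares slope of a series indexed 0..n-1 (a simple trend line)."""
    n = len(series)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(series)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, series))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

def diverging(clinic, aggregate, threshold=0.1):
    """Flag a divergence when the two trend lines move in opposite directions
    or their slopes differ by more than the threshold."""
    s1, s2 = slope(clinic), slope(aggregate)
    return (s1 * s2 < 0) or (abs(s1 - s2) > threshold)

# Made-up monthly biometric averages: the clinic trends up while peers trend down.
clinic_a1c = [7.1, 7.2, 7.4, 7.6, 7.9, 8.1]
aggregate_a1c = [7.2, 7.1, 7.0, 6.9, 6.9, 6.8]
print(diverging(clinic_a1c, aggregate_a1c))  # True
```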

 

Deployment methodology. Payers could create an incentive for providers to share their data through existing HITECH meaningful-use regulations or through ACO/shared-risk payment models. Another incentive would be to offer the aggregated web-services product for free or at a discount. The benefit to a business purchasing the product is a robust, dual-approach (claims and aggregated EHR data) data warehouse from which to retrieve quality outcomes. Such enhanced data could immediately benefit the payer by informing the validity of a claim, or of a requested procedure, through prior-authorization predictive modeling. Quality measures tied to unnecessary procedures, coupled with non-payment, would reduce waste and fraud by providers.

Conclusion

            Many stakeholders are still working on how to improve healthcare IT quality, with EHRs at the forefront. HITECH funding is subsidizing these systems and modernizing many of them. However, many systems still lack a shared standard, and if value is not produced from their use, government subsidies will not make that use sustainable. Payers have the opportunity to make a long-term value investment for all parties by aggregating EHR data alongside claims data to create robust data warehouses.

 References

Bertsimas, D., Bjarnadóttir, M., Kryder, J., Pandey, R., Vempala, S., & Wang, G. (2008). Algorithmic prediction of health-care costs. Operations Research, 56(6), 1382–1392.

Conley, E., Owens, D., Luzio, S., Subramanian, M., Ali, A., Hardisty, A., & Rana, O. (2008). Simultaneous trend analysis for evaluating outcomes in patient-centered health monitoring services. Health Care Management Science, 11(2), 152–66.

Devoe, J., McIntire, P., Puro, J., Chauvie, S., & Gallia, C. (n.d.). Electronic health records vs. Medicaid cl… US National Library of Medicine National Institutes of Health. Retrieved August 4, 2012, from http://www.ncbi.nlm.nih.gov/pubmed/21747107

Obenshain, M. (2004). Application of data mining techniques to healthcare data. Infection Control and Hospital Epidemiology, 25(8), 690–695.

Pro, R. (2009). Analytics Can Improve Outcomes. Health Management Technology, 30(10), 27.

Quality Measurement Enabled by Health IT: Overview, Possibilities, and Challenges. (2012, July). Agency for Healthcare Research and Quality.