RETURN TO SENDER: Aetna to Pay $17M to Settle Claims Related to Vendor Mailer Data Breach

Aetna has agreed to pay $17.2 million and to implement a “best practices” policy regarding sensitive policyholder data, in order to settle class action litigation brought against it arising from a mass mailing sent by one of its mailing vendors. As discussed in a blog post last year, federal class action litigation was brought against Aetna and its mailing vendor in 2017 based on the vendor’s use of glassine envelopes to communicate HIV medication information to Aetna insureds. The envelopes revealed that the named addressee was contacted about options for filling HIV medication prescriptions. The litigation alleged violations by Aetna and its vendor of several laws and legal duties related to security and privacy.

The contract lessons for customers and vendors that arise from the events in question, which were identified in the earlier post, remain the same. Do your contracts for non-IT and non-healthcare services fully consider the risk of privacy and security litigation? Do your contract’s indemnification and limitation of liability clauses contemplate the possibility of class action litigation? Before entering into a contract, have you considered whether the specific vendor services being provided to the particular customer in question implicate laws you hadn’t considered? And have you considered which specific aspects of vendor services may directly impact potential legal liability, and adequately identified and addressed them in the contract?

Importantly, the newly announced settlement itself provides three bonus lessons.

Published data breach cost statistics are helpful, to a point. 

In its 2017 Cost of Data Breach Study, Ponemon Institute reports that the average per capita cost of a data breach in the U.S. for the period of the study was $225. It also reports that, for the same period, the average total organizational cost of a U.S. data breach was $7.35 million. Somewhat remarkably, as part of its settlement Aetna agreed to pay $17.2 million in connection with the breach in question – a figure roughly $10 million over the Ponemon Institute average. Still, Aetna’s payment is not out of the ballpark; averages are averages, after all. Much more remarkable, however, is the per capita settlement amount. Aetna’s settlement represents a per capita figure of $1,272 – more than five times the reported average. (For reference, that per capita cost would put Equifax’s settlement number for its recent breach at $185 billion.) Bottom line: when considering, or counseling clients on, the financial impacts of data breaches, the average cost figures are only as useful as their qualification as averages – any given breach may cost far more, or far less.
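The arithmetic above can be sanity-checked in a few lines. Note the assumptions: the implied class size is derived from the article’s figures rather than stated in the settlement, and the ~145.5 million Equifax record count is drawn from public reporting at the time.

```python
# Back-of-the-envelope check of the settlement figures discussed above.

PONEMON_PER_CAPITA_AVG = 225       # 2017 Ponemon per-record cost (U.S.)
AETNA_SETTLEMENT = 17_200_000      # total settlement payment
AETNA_PER_CAPITA = 1_272           # per-member figure cited above

# Class size implied by the two Aetna figures (not stated in the settlement).
implied_class_size = AETNA_SETTLEMENT / AETNA_PER_CAPITA

# How far the per capita amount exceeds the published average.
multiple_of_average = AETNA_PER_CAPITA / PONEMON_PER_CAPITA_AVG

# Extrapolating Aetna's per capita rate to Equifax's reported record count.
EQUIFAX_RECORDS = 145_500_000      # assumption from public reporting
equifax_extrapolation = EQUIFAX_RECORDS * AETNA_PER_CAPITA

print(f"Implied class size: ~{implied_class_size:,.0f}")
print(f"Aetna per capita vs. Ponemon average: {multiple_of_average:.1f}x")
print(f"Equifax at Aetna's per capita rate: ${equifax_extrapolation / 1e9:.0f}B")
```

Running the numbers confirms the article’s claims: a class of roughly 13,500, a per capita amount over five times the $225 average, and an extrapolated Equifax figure of about $185 billion.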

Data breach cost statistics often do not compare well with litigation settlement amounts. 

Yes, Aetna agreed to pay $17.2 million as part of the settlement, as compared to Ponemon Institute’s reported $7.35 million average U.S. data breach cost. While the $7.35 million figure includes forensics costs, customer churn, post-breach costs, and other direct and indirect expenses, the $17.2 million figure is not as comprehensive. It does not include, for example, Aetna’s legal fees incurred to defend and settle the class action litigation, nor does it include other pre-settlement costs and expenses incurred by Aetna. As efficient or helpful as it may be to compare published per capita or per breach statistics with litigation settlement amounts, it’s also important to identify the full scope of costs and expenses that the published statistics include, as well as which costs and expenses the settlement amounts leave out.

Data breach cost statistics and litigation settlement amounts don’t include non-monetary settlement obligations. 

Cost-per-record, cost-per-breach, and litigation settlement figures can be particularly meaningful and relatable, especially when considering or counseling clients as to the potential financial impacts of data security breaches. Notably, however, the material obligations of defendants settling data breach litigation matters typically are not limited to monetary payments. Aetna, for example, as part of its settlement, also agreed to develop and implement a “best practices” policy for use of certain personally identifiable information, to provide policy updates for five years, to provide policy training for certain Aetna personnel for five years, and to require outside litigation counsel to sign business associate agreements, among other commitments. These activities will require Aetna to incur additional costs and expenses, including costs and expenses for internal and, possibly, external resources in connection with the performance of these activities.

Supplementing the earlier post on this Aetna class action litigation and lessons learned, the recent Aetna settlement and the new lessons cited above provide an even fuller picture of data and security breach and related contract considerations. Not only is it invaluable to consider data privacy and security issues in contracts and the roles of vendors and service providers, it also is important to consider and counsel clients as to the full potential impacts of data breaches, including potential litigation settlement amounts, costs and expenses in addition to settlement amounts, and non-monetary settlement-related obligations.

Lessons Learned: Vendor Sued in Class Action Suit for Security Misses

You’re thinking that something about the title of this post sounds familiar, right? Information technology (IT) vendors and third-party service providers have been in the spotlight for security breaches for some time (see, for example, vendor-based security lapses affecting Target, CVS, and Concentra, to name just a few), and it doesn’t sound surprising that an IT vendor has been sued over a security incident. After all, whether you’re an IT vendor or an IT customer, if you draft or negotiate contracts for a living, these situations are what you try to contract for, right?

Right…but…the recent federal class action suit filed in Pennsylvania against Aetna and its vendor surfaces several new privacy and security considerations for vendors and their customers. The vendor in question was not an IT vendor or service provider. Instead, the plaintiff’s allegations relate to Aetna’s use of a mailing vendor to send notification letters to Aetna insureds about ordering HIV medications by mail. According to the complaint, the vendor used envelopes with large transparent glassine windows – windows that did not hide the first several lines of the enclosed notification letters. The plaintiff asserts that anyone looking at any of the sealed envelopes could see the addressee’s name and mailing address – and that the addressee was being notified of options for filling HIV medications. As a result, the vendor and Aetna are alleged to have violated numerous laws and legal duties related to security and privacy.

For all vendors and service providers, but especially those that don’t focus primarily on privacy and security issues, the Aetna complaint is enlightening. To these vendors and service providers, and to their customers: Do your customer-vendor contracts and contract negotiations contemplate what Aetna and its mailing vendor may not have?

  • Do your contracts for non-IT and non-healthcare services fully consider the risk of privacy and security litigation? A noteworthy facet of the Aetna case is that the mailing vendor was sued for privacy and security violations that were not exclusively due to the customer’s acts or omissions. That is, while the contents of the mailer certainly were key, the vendor’s own conduct as a mailing services provider (not an IT or healthcare provider) was instrumental in the suit being filed against the vendor (and Aetna). Vendor services that previously didn’t, or ordinarily don’t, warrant privacy or security scrutiny, may, after all, need to be looked at in a new light.
  • Do your contract’s indemnification and limitation of liability clauses contemplate the possibility of class action litigation? Class action litigation creates a path for plaintiffs to bring litigation for claims that otherwise could not and would not be brought. Class action litigation against data custodians and owners for security breaches is the norm, and the possibility and expense of class action litigation is frequently on the minds of their attorneys and contract managers who negotiate contracts with privacy and security implications. But, for vendors and service providers providing arguably non-IT services to these customers – the idea of being subject to class action litigation is often not top-of-mind.
  • Before entering into a contract, have you considered whether the specific vendor services being provided to the particular customer in question implicate laws you hadn’t considered? Vendors that operate in the information technology space – and their customers – generally are well aware of the myriad privacy and security laws and issues that may impact the vendors’ business, including, as a very limited illustration, the EU General Data Protection Regulation, HIPAA, and the New York Cybersecurity Requirements. Vendors that aren’t “IT” vendors (and their customers), on the other hand, may not be. For example, the Aetna mailing vendor may not have contemplated that, as alleged by the Aetna plaintiff, the vendor’s provision of its services to Aetna would be subject to the state’s Confidentiality of HIV-Related Information Act and Unfair Trade Practices and Consumer Protection Law.
  • Have you considered which specific aspects of vendor services may directly impact potential legal liability, and have you adequately identified and addressed them in the contract? No, this is not a novel concept, but it nonetheless bears mention. A key fact to be discovered in the Aetna litigation is whether it was Aetna, or the vendor, that made the decision to use the large-window envelopes that, in effect, allegedly disclosed the sensitive and personally identifiable information. Given the current break-neck pace at which many Legal and Contract professionals must draft and negotiate contracts, however, unequivocally stating in a contract the details and descriptions of every single aspect of the services to be provided is often impractical (if not impossible). But, some contract details are still important.

Whether this class action suit proves to be an outlier or is dismissed at some point, consider data security and other privacy and security issues in contracts, and how vendor or service provider conduct may give rise to a security breach or security incident.

Anti Anti-Virus

In July 2017, Bloomberg reported that the anti-virus and security company Kaspersky Lab had been cooperating with the Russian Federal Security Service (FSB) – the Russian counterintelligence agency and successor to the KGB – since 2009. On September 13, 2017, the US federal government mandated that all software made by Kaspersky Lab be removed from government computer systems. Retailers such as Best Buy are also taking steps to remove Kaspersky Lab’s products from their retail offerings.

Kaspersky Lab issued a response, claiming that it has done nothing wrong and is merely a pawn in a political game between the US and Russia.  Russia responded to the report by urging Russian companies to only use Russian software.

Although it’s unlikely we will ever have a definitive answer about whether Kaspersky Lab is gathering data for the Russian FSB, this incident highlights a growing concern that foreign governments might be collaborating with software and hardware companies to spy on other governments, corporate enterprises, and consumers.   How can companies protect themselves in this environment?  Consider five things:

  1. A company should have a plan in place to quickly install a replacement if it discovers that software or hardware in its environment has been compromised.  Often this means maintaining a list of alternative providers and, when possible, having a contact at those alternative providers in case a purchase needs to be made quickly.
  2. Prior to making a purchase, conduct a search of news and industry reports on the brand and product to find any stories that might raise a red flag.
  3. After making a purchase, set an online news alert with the product name and “spy,” “spyware,” “malware,” “security issue” and similar terms in the search field (however, this doesn’t work well for network security or anti-virus products, since nearly every news story about those products contains these terms).
  4. Subscribe to security sites that track potential security issues affecting enterprises and consumers.
  5. In particularly egregious circumstances, unplug the software or hardware so it stops collecting and transmitting information, but first be aware of how that will impact your other systems.
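As a rough illustration of the news-alert idea in step 3 – and its noisiness caveat – a simple keyword filter might look like the sketch below. The product name and headlines are hypothetical.

```python
# Flag headlines that mention a monitored product alongside a security term.

SECURITY_TERMS = ("spy", "spyware", "malware", "security issue", "backdoor")

def flag_headlines(product: str, headlines: list[str]) -> list[str]:
    """Return headlines that mention the product plus a security-related term."""
    flagged = []
    for headline in headlines:
        lower = headline.lower()
        if product.lower() in lower and any(t in lower for t in SECURITY_TERMS):
            flagged.append(headline)
    return flagged

headlines = [
    "ExampleAV adds new parental controls",
    "Researchers find spyware behavior in ExampleAV update",
    "OtherVendor patches security issue in router firmware",
]
print(flag_headlines("ExampleAV", headlines))
# As the text warns, this approach is noisy for anti-virus products,
# since nearly every story about them contains terms like these.
```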

While there is no surefire way to identify a software or hardware vendor intent on stealing information, these steps can help mitigate damages by notifying companies of any known or suspected issues.  In the end, staying current on security risks is one important factor in defending your company, and yourself, against cyber-mischief.

The Future of Data Privacy: You Can Run but You Can’t Hide (or Can You?)

In Ernest Cline’s dystopian novel Ready Player One, the world’s population is addicted to a virtual reality game called the OASIS. The villain in the book is a large communications company named IOI that will stop at nothing to rule the world—the OASIS virtual world, that is. IOI’s motivation is, simply put, profit, profit, and more profit as it peddles its goods and services in the digital reality. Through subterfuge, spying, rewards, and an assortment of other tactics, IOI gathers intelligence on its users, competitors, and enemies, and then uses that information to its advantage.

But even in a fully-connected, always-on digital world such as the OASIS, people have effective tools against IOI’s tracking. They lie. They throw up roadblocks. They create alternate selves. They create private rooms to hold clandestine chats. They go underground. They disconnect.

In a 2013 survey by Pew Research Center, 86 percent of Internet users stated that they had attempted to minimize their digital footprints by taking affirmative steps such as deleting cookies, using a false name or email address, or using a public computer to mask their identities.1 A 2015 survey by TRUSTe/National CyberSecurity Alliance found that 89 percent of consumers refuse to do business with a company that does not protect their privacy.2 Those are just two of dozens of surveys showing similar metrics.3

In response to users’ privacy concerns over the past decade, consumer-friendly privacy protection tools continue to make their way into the marketplace. For example, VPN privacy protection add-ons are now readily available for web browsers, and some browsers, such as Opera, come with a free VPN built directly into the browser.4 Ad blockers have become so popular that some websites are restricting access if a browser blocks ads on the site.5 And privacy-conscious search engines like DuckDuckGo continue to gain loyal users.6

So what does this have to do with the legal intricacies of data privacy? A lot, actually. As demand increases for privacy tools, more companies are meeting that demand in new and innovative ways. Although the privacy risks inherent in artificial intelligence (AI) are well-documented, we are also seeing companies develop AI applications designed to help protect consumer privacy by creating digital noise, or obfuscation, around a person’s online activities. These tools essentially create new layers of false interests and pretend preferences tied to an individual’s online persona, which makes it more difficult for marketers to know which preferences and opinions are true and which are false.7  Expect to see a variety of AI-powered obfuscation and other related tools and services arriving over the next few years as consumers attempt to distract data collectors from real data.

Whether or not these new tools and services are legal will be the subject of much debate, especially by any company being thwarted in its efforts to collect reliable information about a user. Some of these tools will also present novel legal issues related to AI, such as whether an unmonitored chatbot can create a legal contract on behalf of its owner (probably) or whether the owner of an AI tool is always responsible for its activities, even if the AI tool acts contrary to its owner’s instructions (maybe). Then there are the questions of who’s guarding the guards and whether these new privacy tools will eventually be used to collect even more information from consumers.8

In the future, we will certainly see new legislation, regulations, and court holdings affecting how companies and third parties may use personal information of individuals. But technical innovation is much faster and more responsive to consumer demand. As consumers desire better protection for their information, expect to see more privacy tools emerge to help control the types and amounts of data shared with companies and marketers. And as this develops further, these new tools will undoubtedly bring new legal questions and challenges.

This article was originally published in Best Lawyers Business Edition, Summer 2017, p.23.


8 In 2016, a popular browser add-on ironically named “Web of Trust” was discovered to be collecting and selling information about its users. In 2017, an inbox management service was sued for selling user data gleaned from users’ inboxes.

Getting Your Data Back – a Hostage Crisis?

One of the key differences between a cloud computing delivery model and a customer-hosted solution is that the service provider, not the customer, possesses the customer’s data under a cloud computing delivery model. At the end of such a relationship the customer needs its data returned. Many service providers’ form agreements, however, do not address when and in what format the data will be returned. Given the vital importance of data to a company’s business, a customer should address this issue prior to entering into such an agreement.

What seems like a relatively simple provision to implement can sometimes lead to surprisingly protracted discussions. Customers often request that their data be returned at expiration or termination of the contract (or during the termination/expiration period) in the format requested by the customer. Service providers’ concern with such a requirement is that the customer might request a format different from the one in use, resulting in expensive and time-consuming file conversion. Or the customer might request some of its data in both paper and electronic format, requiring the service provider to print reams of paper. These concerns lead service providers to counter with a provision requiring the service provider to return the data in its then-current format.

This typically leads to the customer raising its concern that the data could be returned in a format that is no longer compatible with the customer’s systems, requiring the customer to undertake the expensive and time-consuming conversion process and causing a material adverse impact to the customer’s business.

What’s the right answer? Each negotiation will be different depending on factors such as the importance of the data, the leverage of the parties, and the amount of data at issue. Service providers, however, must be sensitive to the customer’s concerns – the data is the customer’s lifeblood, and the customer does not want to be held hostage at the end of the relationship. I’ve seen parties eventually agree that the service provider must return the data upon expiration or termination in a format reasonably usable by the customer at no additional cost to the customer, or in a format reasonably requested by the customer and commonly used in the industry based on the type of data.


Negotiating Cloud Contracts

We’ve all heard the phrase. Cloud vendors speak it in somber, authoritative tones as frustrated customers grumble and curse. The phrase? “Sorry, we don’t make changes to our standard contract.”

A virtual fight has been brewing over “the phrase” in the last few weeks. The first volley was fired by Bob Warfield in a blog post called “Gartner: The Cloud is Not a Contract” where Mr. Warfield argues that it’s perfectly reasonable for a cloud provider to use “the phrase.” In fact, he says, if a cloud provider doesn’t say it at every opportunity, the provider risks becoming *gasp* a mere datacenter. He says:

What [the Cloud] is about is commoditization through scale and through sharing of resources which leads to what we call elasticity. That’s the first tier. The second tier is that it is about the automation of operations through API’s, not feet in the datacenter cages.

* * *

Now what is the impact of contracts on all that? First, a contract cannot make an ordinary datacenter into a Cloud no matter who owns it unless it addresses those issues. Clouds are Clouds because they have those qualities and not because some contract or marketer has labeled them as such. Second, arbitrary contracts have the power to turn Clouds into ordinary hosted data centers: A contract can destroy a Cloud’s essential “Cloudness”!

* * *

How do we avoid having a contract destroy “Cloudness?” This is simple: Never sign a contract with your Cloud provider that interferes with their ability to commoditize through scale, sharing, and automation of operations. If they are smart, the Cloud provider will never let it get to that stage.

Mr. Warfield goes on to argue that any deviation to a cloud provider’s contract that impacts scale, sharing, or automated ops essentially destroys the benefit of cloud computing, and results in turning a cloud contract into a managed data center contract. In other words, if a provider is not a “pure” cloud provider, they are a datacenter provider.

Lydia Leong at Gartner quickly escalated the battle, responding in a blog post entitled “The Cloud and Customized Contracts.” Ms. Leong counters that cloud providers should be careful in using “the phrase” since it might not align with their business goals. At the same time, she also cautions that customers looking for substantive customizations to a cloud offering might undermine the cost savings they are seeking:

[A] cloud provider has to make decisions about how much they’re willing to compromise the purity of their model — what that costs them versus what that gains them. This is a business decision; a provider is not wrong for compromising purity, any more than a provider is right for being totally pure. It’s a question of what you want your business to be, and you can obtain success along the full spectrum. A provider has to ensure that their stance on customization is consistent with who and what they are, and they may also have to consider the trade off between short-term sales and long-term success.

* * *

Customers have to be educated that customization costs them more and may actually lower their quality of the service they receive, because part of the way that cloud providers drive availability is by driving repeatability. Similarly, the less you share, the more you pay.

* * *

. . . I believe that customers will continue to make choices along that spectrum. Most of them will walk into decisions with open eyes, and some will decide to sacrifice cost for customization. They are doing this today, and they will continue to do it. Importantly, they are segmenting their IT portfolios and consciously deciding what they can commoditize and what they can’t. . . . [U]ltimately, the most successful IT managers will be the ones who manage IT to business goals.

So should a customer negotiate a cloud contract or not? As Ms. Leong states, it depends on the customer’s business demands. If the business demands the lowest cost and is willing to bear additional risk, then a non-negotiated “pure” cloud contract might be best. On the other hand, if the business demands that costs and risks be balanced, or that risk mitigation take priority over cost savings, then a negotiated contract is likely the best option.

Fortunately for customers, market forces are already influencing cloud providers to make their contracts more detailed and customer-friendly. In a recent article about cloud predictions for 2011, journalist George Lawton writes:

As cloud providers compete for new customers, many will begin to extend more elaborate guarantees, concrete remedies and better data transit awareness. The guarantees will provide better legal protection on the control of data. Confident providers will also include more detailed service-level agreements (SLAs) and financial remedies, covering all aspects of the cloud service, that could affect the customer’s business performance. Cloud providers will also offer to provide improved visibility into the movement of data to maintain legal requirements.

If this trend continues and cloud providers include reasonable protections for customers in their standard contracts, then hearing “the phrase” might not be so bad after all. In the meantime, customers must continue to balance cost savings with risk mitigation, and negotiate (or not) accordingly.

Is Your Data in the Cloud Backed Up and Recoverable?

In 2011, Acronis, a backup and recovery solutions provider, launched a Global Disaster Recovery Index for small and medium-sized businesses to measure IT managers’ confidence in their backup and recovery operations. Notably, businesses in the United States expressed low confidence in their ability to execute disaster recovery and backup operations in the event of a serious incident, whether in their own environment or a third-party cloud environment.

As companies move various functions to a cloud environment, they can increase their confidence by contractually agreeing to data backup and recovery requirements with their cloud providers. Indeed, customers can specify, as a service level or other contractual requirement, the (a) recovery point objective (“RPO”), which is the point in time to which the provider must recover data, and (b) recovery time objective (“RTO”), which defines how quickly the provider must restore the data to the RPO.
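To make these metrics concrete, a post-incident compliance check against contractual RPO and RTO targets might be sketched as follows. The four-hour RPO and eight-hour RTO are hypothetical values, not drawn from any actual agreement.

```python
# Check whether a provider's recovery met hypothetical contractual RPO/RTO targets.
from datetime import datetime, timedelta

RPO = timedelta(hours=4)   # max tolerable data loss: time from last backup to incident
RTO = timedelta(hours=8)   # max tolerable downtime: time from incident to restoration

def check_recovery(last_backup: datetime, incident: datetime,
                   restored: datetime) -> dict[str, bool]:
    """Did the provider meet the contractual RPO and RTO?"""
    return {
        "rpo_met": (incident - last_backup) <= RPO,
        "rto_met": (restored - incident) <= RTO,
    }

result = check_recovery(
    last_backup=datetime(2024, 1, 1, 2, 0),   # last backup at 2:00 AM
    incident=datetime(2024, 1, 1, 5, 0),      # outage at 5:00 AM
    restored=datetime(2024, 1, 1, 15, 0),     # service restored at 3:00 PM
)
print(result)  # RPO met (3h of data loss <= 4h); RTO missed (10h to restore > 8h)
```

Spelling the targets out this way in the contract gives both parties an objective test after an incident, rather than an argument about expectations.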

Too often, however, companies sign cloud agreements without clearly specifying these metrics. Indeed, when a disaster or disruption occurs, many companies are surprised to find their contracts silent on these metrics, and the cloud provider operating under a much less stringent RPO and RTO than the company expected.