S. P. Havens Consulting

Software and information systems specialists

Until recently, I used Arvixe for some web hosting. In the wake of the Heartbleed disclosure, scans showed that www.arvixe.com, along with other subdomains, was vulnerable to Heartbleed. Arvixe updated their systems, as they should have, but, several weeks later, they still hadn’t revoked and replaced their SSL certificates, a necessary step for recovering from the Heartbleed vulnerability. I used Arvixe’s tech support chat to ask about their plans for completing the repairs. The responses surprised me. The following is the support thread, involving six different Arvixe representatives, unedited except for stripping post footers and my personal email address.

Hrishikesh W

Staff

Posted on: 09 May 2014 07:52 AM

Hello,

Kindly note, I am now escalating your issue to the concerned department. They will investigate the issue and update you accordingly.

You do not need to reply to this email unless you have any further questions or any additional information to provide.

James G.

Staff

Posted on: 09 May 2014 10:25 AM

Hello,

What scan are you using that says there is a vulnerability?

http://tif.mcafee.com/heartbleedtest?utf8=✓&q=https://www.arvixe.com&commit=Scan

http://tif.mcafee.com/heartbleedtest?utf8=✓&q=+https://lapwing.arvixe.com&commit=Scan

The great majority of our Linux servers like lapwing are using earlier versions of the OpenSSL libraries than the ones vulnerable to the bug. Where necessary, we have taken all precautions to secure vulnerable servers.

The version of Linux we use on our shared hosting does not support SNI. If you want to use SSL, then you can purchase a certificate and IP address for your domain. The shared SSL certificate, which I think you are referring to as “the wrong place,” is there for our customers to use if they want to.

If you have any other questions, please let us know.

Regards,

James G.
Jr. Technical Operations Officer

Scott Havens

Posted on: 09 May 2014 11:00 AM

Hi James,

Thanks for the quick response. The ticket contents were taken out of context and/or poorly summarized from my conversation with the first-line tech support, so I’ll try to reexplain here to make sure there’s no confusion.

I understand that the current configured OpenSSL library on the servers is no longer vulnerable to a fresh Heartbleed attack. At one time many of Arvixe’s servers were configured with a vulnerable version. I know this included the main www.arvixe.com site (which, of course, includes billing), although I didn’t check at the time whether lapwing was included in that set. Forum posts from Arvixe staff and online scanners like the one you pasted confirm that the software has since been patched.

Unfortunately, simply patching the servers with a non-broken version is not sufficient to fix the effects of Heartbleed. Any server that at any point in the past was vulnerable is still compromised, even if scans show it has been patched. More specifically, this means that the certificate used for SSL was publicly exposed, permanently compromising all future security relying on that private key. Thus, the second critical step in repairing Heartbleed is to revoke existing SSL certificates and replace them. Until this is done, Heartbleed, and Arvixe’s exposure to it, has not been fixed.

The fact that Arvixe does not seem to be aware of this (at least from what I’ve seen or heard) or have a plan to fix this is a cause for concern. Can you address this?

–Scott

P.S. Regarding the SNI/SSL part of the ticket, that was a separate aside to the tech support personnel about how https://sphavens.com forwards to a non-sphavens.com domain due to the IP-based certificate, as opposed to returning an error. It is unrelated to my Heartbleed concern.

James G.

Staff

Posted on: 09 May 2014 02:40 PM

Hello,

As I mentioned before, the majority of our Linux servers never had OpenSSL upgraded to a vulnerable version. I repeat, they were never upgraded to a vulnerable version/library. Therefore, they were never vulnerable to the bug.

For those servers that were affected, “we have taken all precautions,” and I mean all precautions. We appreciate your concern and are well aware of the situation.

If you have any other questions, just let me know. Thanks!

Regards,

James G.
Jr. Technical Operations Officer

Scott Havens

Posted on: 09 May 2014 03:39 PM

Hi James,
I am happy to take you at your word that the majority of the Linux servers were never vulnerable. However, you did have several that were, including at least some shared Linux hosts (according to http://forum.arvixe.com/smf/general/openssl-critical-vulnerability/msg62525/#msg62525), and definitely the main Arvixe site. It is simply demonstrably not true that Arvixe has taken all precautions with those servers, as the SSL certificates in place have not been replaced. You can see for yourself by going to https://www.arvixe.com and examining the certificate for the site. It was issued on 2012-03-19, more than two years before Heartbleed was patched. Do you have a plan in place to complete this work?
Thanks,
Scott

Ryan C

Sr. Tech Operations Officer / Lead System Admin
Staff

Posted on: 09 May 2014 09:55 PM

Hello,

I did want to point out, that in regards to the reference of our own servers being effected by it, the direct www.arvixe.com or the main specific core server hosting the information was never actually impacted by this exploit due to the version of software being ran on the box. Said shared servers cannot simply just have their SSL replaced, as that could cause great issues for some website owners.

Ryan C
Sr. Technical Operations Officer

Scott Havens

Posted on: 10 May 2014 10:35 PM

Hi Ryan,

You state that www.arvixe.com was never impacted due to the version, yet scans from roughly a month ago showed it as vulnerable. Are you stating that the scans were incorrect, and at no point did any server with the cert for www.arvixe.com have OpenSSL 1.0.1 through 1.0.1f installed with TLS Heartbeat enabled?

–Scott

Erik Solomonson

Staff

Posted on: 12 May 2014 07:48 AM

Hello,

Its odd. We got the notice of the scans saying we were vulnerable. But we never were and we checked very thoroughly, but the scans continued to say we were vulnerable.

This was a very serious vulnerability but many of the scanners seemed to default on the side of “vulnerable”, so my best guess is that the scans were wrong. I can say this because we have yet to be affected by an actual attack on any of our SSL certifications. when you have a paypal and credit card based business people, credit card companies and paypal are very quick to notify you of the use that crops up. The theft that happend to Target was done via an organized gang that found a weak wifi link and gianed escalations and held onto the cards they had. That kind of restraint has yet to happen with a released vulnerability as the criminals would scramble to use the cards and accounts they got before competing criminals used the information.

But we always appreciate customers who challenge us and question our security practices so thank you for your comments on this matter.

–Erik

Scott Havens

Posted on: 12 May 2014 09:48 AM

Hi Erik,

Thanks for the reply. I’d like to address several specific concerns I have with it.

First, Heartbleed scanners (at least the several I’ve seen) do not operate via heuristics that produce some subset of false positives and/or false negatives depending how they’re tuned. They operate by actually crafting the attack and checking the result. There are myriad reasons for some type of “not vulnerable” result, categorized differently by different scanners. However, a “vulnerable” result means just one thing. It does not mean that it’s only probable that it is vulnerable — it means the server was explicitly checked for the bug and was successfully exploited.

There are other sites whose operators believed fervently that they had checked the OpenSSL version on the server yet were still flagged as vulnerable. The reasons ended up including:
• Load balancers or SSL endpoints that had not been updated
• Libssl library being a vulnerable version even if the OpenSSL package doesn’t seem to be
• The appropriate processes not being restarted after updating
• Using mod_spdy, which included the vulnerability
• Some other static binaries using a specific (vulnerable) version

At one point, repeated scans against arvixe from multiple sources demonstrated the active vulnerability. Later, scan results from those sources changed to show arvixe as no longer vulnerable. The logical conclusion is that the search for the source of the vulnerability missed one of the above causes (or one I have not listed), which has since been corrected.

Finally, the threat model for Heartbleed is by no means limited to criminals looking for credit card information to immediately exploit. Credit card fraud is the least of my concerns, as it is easily detected and fixed. Most of the attacks possible with exposed private keys are much more difficult to detect, prevent, or repair, particularly when other potential mitigating factors like perfect forward secrecy are not in place. I highly encourage you to reexamine that threat model.

In summary, I continue to be gravely concerned that Arvixe appears to continue to misunderstand the cause of, scope of, and steps necessary to correct the Heartbleed vulnerability. This support thread has not assuaged my fears; if anything, it’s confirmed them. In light of this thread, I hope Arvixe reevaluates both its specific approach to addressing Heartbleed (including replacing SSL certificates on once-exposed systems) and its general approach to similar problems in the future (including requiring a more thorough and detailed postmortem and contacting customers of steps they need to take, such as changing passwords, in light of exposures).

While I realize I haven’t asked specific questions in this reply, I would love for you to address any or all of it.

–Scott

Erik Solomonson

Staff

Posted on: 12 May 2014 10:06 AM

Thanks.

Could you show me some of these scanners please and I would be happy to check. I know there was some issues with scanner users possibly committing the “crime” of exploiting a vulnerability but since I have permission to scan arvixe property I would be happy to test with the scanners that were referred to.

–Erik

Scott Havens

Posted on: 12 May 2014 10:53 AM

Hi Erik,

I am aware of only two categories of Heartbleed scanners:
• Those that are run locally on a system to identify installed packages that match a predetermined list of known vulnerable versions.
• Those that can be run remotely that detect the vulnerability via actively exploiting it.

In other words, every web site I’ve seen that performs scans, including the McAfee site listed earlier in the thread by your colleague, functions by actively exploiting the bug. They don’t rely on heuristics, simply because performing the exploit directly is easy and (save for possible bugs in the scanner itself) definitive.

Second, while I obviously encourage you to rerun scans on servers to which you have access, the servers I’ve seen (including www.arvixe.com) appear to have been patched, and thus will no longer be flagged by the scans. However, to repeat myself, this does not mean the problem is resolved. If the server was ever vulnerable, then the secrecy of the private key has been compromised. Resolving Heartbleed requires replacing the SSL certificate after patching the software. If you don’t, then your security remains compromised even if you are no longer vulnerable to fresh Heartbleed attacks.

My questions, then, specifically, are:
1. Do you acknowledge the certainty, the probability, or even the possibility that some set of your servers, including particularly www.arvixe.com, were at any point exposed to Heartbleed?
2. Do you acknowledge that the Heartbleed vulnerability is not resolved until the appropriate SSL certificates have been replaced?
3. If ‘yes’ to 1 and 2, do you have a plan in place to replace the necessary certificates?
4. If ‘yes’ to 1, do you have a plan in place to notify customers to change their passwords and other relevant secure information (either immediately, or after replacing the certificates)?

Thanks for your help.

–Scott

Erik Solomonson

Staff

Posted on: 12 May 2014 11:32 AM

1. slim possibility. I would place money on “no” if forced to in one of those philosophy 101 improbably scenarios.

2. Its a vulnerablility with OpenSSL and not SSL itself, Certs do not need to be changed.

3. If 2, therefore no.

4. No. The probability of this being true does not require a plan to be in place since financial information is not stored locally.

Scott Havens

Posted on: 12 May 2014 12:22 PM

Wow. So, Arvixe’s position is this:
1. It is extremely likely that all scanners that claimed to successfully craft an exploit against Arvixe were lying or had bugs, and now that Arvixe is showing as no longer vulnerable it is because the scanners were all later changed or fixed.
2. It is extremely unlikely that the Arvixe team missed any of the OpenSSL vectors, and, further, Arvixe can confirm that they explicitly checked every single one of the uncommon or unexpected ones I listed.
3. Everyone in the world who has successfully demonstrated stealing SSL private keys via Heartbleed is either lying or mistaken.
4. It is not worth contacting customers notifying them of the risk of exposed information because of the confidence Arvixe has in their position and of the relative unimportance of the secured data.

I find it hard to believe you are really prepared to stand by this position. I hope that you stated it as a gut response and are willing to take a step back and address it rationally. I’m taking the time to try to help you out here; I understand software security is difficult for everyone and the complexity means that it’s far too easy for anyone and everyone to slip up once in a while. I’m not trying to assign blame as to why it happened; I just have an interest in getting the problems fixed.

Can we take a step back and try approaching this again?

Erik Solomonson

Staff

Posted on: 12 May 2014 12:34 PM

I appreciate your concern but is there anything I can do for you? What do you want me to do from a technical support level? That is what I am here for. If you have an issue that needs debating my supervisors are often on the forums and I know this was talked about there.

So if you have a problem I would be happy to help. If you want to discuss arvixe’s attitude toward OpenSSL and heartbleed we would be happy to use our forums to discuss this.

–Erik

Scott Havens

Posted on: 12 May 2014 12:56 PM

My direct technical support problem is that the security of my communications and accounts with Arvixe, including but not limited to my password and my credit card information, is now and will continue to be compromised unless action is taken by Arvixe to correct it. This problem is presumably shared by most or all of your customers, but I can’t speak for them.

This is not an issue I’m trying to debate or policy I’m trying to discuss; it is a technical problem I am trying to fix. Given the breadth of the problem, I understand that, as technical support personnel, you probably are not in a position to fix it directly. I’ve tried to provide all the information necessary to explain exactly why it is broken and what would constitute a successful fix, particularly in light of the fact that the problem doesn’t seem to be well understood and Arvixe appears to be under the misconception that it has been fixed already. My expectation of technical support, and thus my request of you, is to escalate the ticket to someone who is able to take that information and fix it.

Arvixe System Suppo…

Automated Messenger

Posted on: 12 May 2014 01:10 PM

Dear Valued Customer,

This is an automatic response to inform you that this ticket has been reviewed by our team member Erik S. Erik has not been able to resolve your ticket yet. However, Erik has referred your ticket to other admins to assist.

Patrick Stein

Staff

Posted on: 14 May 2014 12:01 AM

Hello Scott,

Thank You for reaching out to us in regards to this matter. I am a member of the Quality Assurance/Management Team here at Arvixe. I would be happy to address your concerns as best I can. As others have stated to you our Servers save for a very very small minority never ran versions of OpenSSL that was vulnerable. So we do not need to update anything on our end at this point. However even if we did, the nature of shared servers makes it very difficult to actually update those SSLs as it would disrupt several services and customers.

As for the actual Shared Servers non of them ran a vulnerable version as we run older versions that are backported from the core CentOS Repos. Once it was noted that the vulnerability did exist we updated all our Servers to the latest stable versions of OpenSSL.

Please let me know if you have further concerns or questions.

Patrick Stein
Quality Assurance Team

Scott Havens

Posted on: 14 May 2014 08:03 AM

Hi Patrick,

Thanks for your response. I understand that Arvixe believes most of its servers were never vulnerable. However, there is overwhelming evidence that your technical team is mistaken, at least when it comes to the primary www.arvixe.com site. No one with whom I have talked so far has addressed this, so I will summarize again.

Repeated scans from multiple sources showed www.arvixe.com as being vulnerable. Heartbleed scans do not work by just looking for some sort of pattern — they literally perform the exploit themselves. Let me repeat that: multiple sources successfully exploited Heartbleed multiple times against www.arvixe.com. If you are claiming that www.arvixe.com was never vulnerable, you are also literally claiming that each of those scanners was either intentionally lying or had a bug that misreported the results. You are also claiming that every scanner was later simultaneously fixed to be correct once Arvixe no longer appeared vulnerable.

To further support that Arvixe was mistaken, there is a laundry list of OpenSSL vulnerability vectors, the complete list of which few people were aware, and all which apply even if OpenSSL appears to be the correct version:
• Load balancers or SSL terminators that had not been updated
• Libssl library being a vulnerable version even if the OpenSSL package doesn’t seem to be
• The appropriate processes not being restarted after updating
• Having mod_spdy installed, which included the vulnerability
• Some other static binaries using a specific (vulnerable) version

If you are claiming that www.arvixe.com was never vulnerable, you are also claiming that your technical team explicitly checked every one of these possibilities and could not have missed any of them.

In summary, there are two possible scenarios: (A) the creators of every known Heartbleed scanner were colluding to make Arvixe look bad for a brief period of time (or all had the exact same bug in their scanner for the same brief period of time), or (B) Arvixe happened to miss one of a poorly understood yet quite extensive list of vectors to check. Thus, I have two questions:
1. Which do you think is more likely?
2. Unless you can demonstrate that (A) is definitely true and (B) is definitely false, wouldn’t it be an appropriate mitigation step to replace the SSL certificate on www.arvixe.com?

–Scott

Michael Carr

Staff

Posted on: 14 May 2014 11:04 AM

Scott,

Whatever scans you ran reported false positives. The version of OpenSSL that we run on Arvixe,com were never vulnerable to attack. I can not comment on the reasons for this unless I can see the actual tests that were performed against the server.


Michael Carr
Quality Assurance Manager


Scott Havens

Posted on: 14 May 2014 11:22 AM

Hi Michael,

I don’t know how much more clear I can be: Heartbleed scans do not produce false positives. They directly exploit the server. The only way to produce a positive is if the server is successfully exploited. The server will not respond with a positive result unless the server is vulnerable. False negatives are possible, but false positives are not. If you understood how Heartbleed works on a technical level, you would understand this. I am happy to walk you through it, step by step, but there are other sources available that may be more convenient for you.

Further, as I’ve said, there are many reasons that you may think you were running only a non-vulnerable version but have been incorrect. I listed several. Arvixe still has not confirmed that you checked every single one.

My two questions from my last post remain unanswered.

–Scott

Michael Carr

Staff

Posted on: 14 May 2014 01:24 PM

I am very well versed in security and know exactly how heartbleed works, I am not going to argue with you. If you want to share with me the specifics of the test you ran I will be glad to take a look at it. Our systems were not running the version of OpenSSL that was impacted by heartbleed, of course we ran our own checks against this serious exploit when it was released. We do not ignore security threats.

The SSL certificate on arvixe.com is nearing expatriation and I am having it updated now. Even if our server would have been vulnerable, and someone was able to steal our private key (highly unlikely as we never restart it) Replacing the certificate would be the only thing left for us to do.


Michael Carr
Quality Assurance Manager


Scott Havens

Posted on: 14 May 2014 01:51 PM

Hi Michael,

Given that you know how Heartbleed functions and how Heartbleed scanners work, then you must know full well that it is effectively impossible for multiple scanners, multiple times, to independently produce false positives. This includes the scans that you know Arvixe itself performed. I don’t see a need to debate that point further, as we both know that you understand full well that the results were not false positives.

I do not believe you ignore security threats, nor did I question whether you ran at least some checks against the exploit. I questioned whether your checks were sufficiently complete. I offered a specific list of checks that are additionally necessary, beyond the obvious, to confirm that all vectors were covered. If even one of those was not performed, then Arvixe’s own checks were not complete. I suspect you realize this. I have asked multiple times for a simple yes or no answer as to whether every item on that list was explicitly checked, and every time the response has avoided answering that question. If the answer is “no”, then, as you know, you cannot claim with certainty that you were not vulnerable.

I am happy that the SSL certificate is finally being replaced. It is the action I have been requesting from the beginning. That said, I’m disappointed that it has taken this long. I have now talked with six different Arvixe representatives, including you, with very specific requests, very specific questions, and very specific evidence for my requests and questions. At every stage, my questions and requests have either gone completely unanswered, or answered with bald assertions that ignore and don’t even attempt to address the evidence. It does not give me confidence in Arvixe’s ability to secure servers in the future.

–Scott

Patrick Stein

Staff

Posted on: 14 May 2014 10:44 PM

Hello,

Thank You for the update, I am sorry to hear that you feel that way however we can only state to you the facts coming from our end and the facts are we never ran versions that would be vulnerable. We would have no reason to hide it even if we did as we strongly believe in transparency with our customers and fellow staff.

If you need anything else do let us know.

Best Regards,

Patrick Stein
Quality Assurance Team

Scott Havens

Posted on: 14 May 2014 10:54 PM

Hi Patrick,

Yet again, that is simply an assertion that is not even attempting to address the overwhelming evidence to the contrary, which I’ve presented repeatedly and which has been ignored in every response. I’ll try focusing on a single yes-or-no question: can you confirm that Arvixe explicitly checked every single Heartbleed vulnerability vector I listed previously?

–Scott

Scott Havens

Posted on: 14 May 2014 11:06 PM

Let me just be blunt: look, everyone involved in this conversation knows that the scan results weren’t false positives, and everyone involved knows that Arvixe did not actually check every single vulnerability vector I listed. That’s why you’re refusing to answer the questions directly. I understand that. I’m not looking to shame you guys. I’m sure you’re working hard to do the best you can. Most people didn’t realize some of those vectors even existed until well after the fact; no one knows everything, particularly with security. I just want you guys to admit that it slipped by you, but that you realize this and want to fix it. These are my interests here: fixing the problem now, and trying to give you the chance to reestablish some trust that you’ll do the right thing.

Patrick Stein

Staff

Posted on: 14 May 2014 11:56 PM

Hello,

>>can you confirm that Arvixe explicitly checked every single Heartbleed vulnerability vector I listed previously?

Yes, we have check all Vulnerabilities related to the HeartBleed Bug.

As for your other reply, I like Michael am not going to argue with you. I have present to you the fact as the exist on our end, We simply did not run any versions that would be vulnerable. HeartBleed did not slip by us, nor did any of the vulnerabilities that came with it.

Best Regards,

Patrick Stein
Quality Assurance Team

Scott Havens

Posted on: 16 May 2014 12:55 PM

I don’t get it. Since you know how Heartbleed works, you guys know, for a fact, that exploits were actively and successfully performed against www.arvixe.com. Yet you continue to insist that it was absolutely impossible for your team’s review to have missed even one thing.

Well, you can’t say I didn’t try.
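Postscript, for readers who want to verify the two technical claims I kept repeating in the thread. First, remote Heartbleed scanners work by performing the exploit: after completing a TLS handshake, they send a heartbeat request whose declared payload length exceeds the payload actually sent. A vulnerable server replies with up to 16 KB of its own memory to cover the difference; a patched server discards the request. A minimal sketch of that malformed message (just the bytes; the surrounding handshake is omitted):

```python
import struct

# TLS record header: content type 0x18 (heartbeat), TLS 1.1, 3-byte body.
record_header = struct.pack(">BHH", 0x18, 0x0302, 3)
# Heartbeat body: type 0x01 (request), declared payload length 0x4000
# (16384 bytes), but no payload bytes actually follow.
heartbeat_request = struct.pack(">BH", 0x01, 0x4000)
probe = record_header + heartbeat_request
# A vulnerable server echoes back ~16 KB of process memory to "cover"
# the declared payload; a patched server silently discards the request.
# That asymmetry is why a positive scan result cannot be a false positive.
```

Second, checking whether a site’s certificate predates the disclosure takes only a few lines. A sketch, assuming Python 3; the hostname is the one from the thread:

```python
import socket
import ssl
from datetime import datetime, timezone

HEARTBLEED_DISCLOSED = datetime(2014, 4, 7, tzinfo=timezone.utc)

def cert_predates_disclosure(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["notBefore"] looks like "Mar 19 00:00:00 2012 GMT"
    issued = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notBefore"]), tz=timezone.utc)
    return issued < HEARTBLEED_DISCLOSED

print(cert_predates_disclosure("www.arvixe.com"))
```

A certificate issued before 2014-04-07 on a host that was ever exploitable has, by definition, never been replaced since the exposure.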

Any organization that relies on more than one system or application will eventually need to integrate those systems and applications. Yet, as common as this need is, and despite the many software packages released to help with the process, a lot of teams still lack an understanding of the fundamentals of application integration. They may deliver a solution that transfers data between systems, but the systems often end up tightly coupled and fragile as a result. Successful integration projects don’t just transfer data from system A to system B; they also allow each system to change and grow independently without breaking anything.

So, to help, I’ve put together a cheat sheet. It breaks down the application integration process into a handful of primary steps and offers some suggestions for what to keep in mind for each step. It’s a contract-first development process.

  1. Define the contract. Establish precisely and explicitly the minimal information each system needs and when. Confirm that the partner system can and will provide this information under the proper circumstances. This is the contract.
    • How should each datum be identified? Integer, string, GUID?
    • What values/fields are needed in each datum, in the form of name and data type?
    • What are the semantics of the data, beyond the names and basic data types of the fields?
      • What do the normalized values look like? E.g.
        • Are all datetimes in UTC?
        • Have domain-specific values (e.g. product names) been canonicalized?
      • How would the data be categorized?
        • Commands
        • Queries
        • Query results
        • Telemetry
        • Notifications
    • How long must the data be retained by the source system?
      • If the data is transferred once, must it be transferable again?
    • What are the necessary performance characteristics for the contract?
      • Frequency
      • Load
      • Latency
      • Throughput
  2. Build the interface that implements and represents those contract details (a minimal sketch follows this list).
    • The key is that, whatever it is, the interface:
      • explicitly represents the contract to the best of its technical abilities, and
      • hides the implementation details in the partner system from the source system.
    • Different techniques and patterns are available at this stage, including but not limited to:
      • Database tables, views, or functions with the appropriate field names and datatypes
      • Pub/sub queues containing messages matching an appropriate schema
      • Direct connect web services
  3. Connect each system to the interface. Each system will be attached independently because the interface is a clean break between the systems.
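To make steps 1 and 2 concrete, here is a minimal Python sketch of a single contract datum and the interface that carries it. Every name in it (OrderPlaced, to_message, and so on) is hypothetical; the point is only that the fields, types, and normalization rules from the contract are explicit, and that nothing about either system’s internals leaks through.

```python
# A hypothetical contract datum for a pub/sub interface (step 2).
# Field names, types, and normalization rules come straight from the
# contract defined in step 1; nothing here exposes either system's
# internal schema.
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OrderPlaced:
    order_id: str        # identification: a GUID, carried as a string
    product_name: str    # domain value, already canonicalized
    placed_at: datetime  # normalized: always UTC

    def to_message(self) -> str:
        """Serialize to the wire format the contract specifies."""
        if self.placed_at.tzinfo is not timezone.utc:
            raise ValueError("contract requires UTC datetimes")
        return json.dumps({
            "order_id": self.order_id,
            "product_name": self.product_name,
            "placed_at": self.placed_at.isoformat(),
        })

# Step 3: each system attaches to the interface independently, e.g. the
# source system publishes OrderPlaced messages to a queue, and the
# partner system consumes them without knowing who produced them.
```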

If you find this useful, or have any suggestions or changes, please let me know.

The Extended Mind Thesis, or EMT, is the proposal that the human mind isn’t limited to “stuff happening in your brain”, but includes parts of the external world, including tools that assist in cognitive processing. Andy Clark and David Chalmers famously[1] argued that, as far as the definition of ‘the mind’ is concerned, there is no qualitative difference between:

  • counting on one’s fingers versus counting in one’s head,
  • multiplying with a hand-held calculator versus a calculator jacked directly to the brain, or
  • memorizing a set of directions versus an amnesiac writing them down in a notebook.

Let’s assume you’ve read the paper, put some thought into it, and are convinced — you buy that your mind isn’t 100% contained within your skull. Should you accept that your mind extends out even as far as, say, the Internet? Clark and Chalmers anticipated this question. In 1998, they found it implausible, because the Internet wasn’t ubiquitous and it would be unusual for someone to rely on it to the same extent as the amnesiac’s notebook. Fifteen years later, though, we’re in the age of Google, of the cloud, of a smartphone in every pocket. It’s hard to argue that Internet-based tools wouldn’t be subsumed in the EMT nowadays.

At this point, if you’ve kept up with current events at all, you already realize where I’m going with my titular claim of psychics in the employ of the state. But let’s take a minute to run a thought experiment.

Imagine a government agency that secretly has developed the ability to telepathically listen to and record your mental processes. If you, for example:

  1. Realize you’re pregnant
  2. Calculate the due date for your pregnancy
  3. Decide your budget for the next several months
  4. Make a mental note to pick up something from the store
  5. Think about all the other errands you need to run
  6. Figure out the most efficient way to get to all the stores today
  7. Wonder if you’ll have time to see a movie tonight
  8. Try to remember the name of a movie you saw last year
  9. Fantasize about a celebrity from that movie

Then this agency, with its mind reading capability, would have recorded all of it. Presumably such an agency does not exist, so let’s just call it No Such Agency, or NSA for short. If this NSA’s activities ever came to light, I suspect most people would find it morally repugnant, no matter how noble an excuse government officials offered.

Here’s the rub: If you’ve been following current events about the government’s surveillance activities, and you accept the EMT, then this isn’t just a thought experiment to you. Our cognitive processes, our thoughts, our minds are literally being read when we do a web search, set a calendar reminder for ourselves, or create a todo list saved in the cloud.

Michael P. Lynch wrote recently about how government invasions of privacy can be dehumanizing and strip people of their autonomy. For those who accept the EMT, you don’t have to settle for abstract concerns about ‘autonomy’. You arrive straight at a concrete, “crazy-but-true” conclusion: government psychics are reading our minds.

[1] Famous is relative when we’re talking cognitive philosophy.


Many years ago, there was a wealthy family in poor health. The local doctor examined them and discovered to his surprise that the entire family showed symptoms of scurvy.  The family had no fruit in their diets.  The doctor’s prescription was simple:  eat more fruit.  As the wealthy family didn’t prepare their own meals, this order was passed to the family’s household staff.

The family’s butler was in charge of both the pantry and all the male staff.  The butler instructed the errand boy that, from now on, he would need to procure fresh fruit on his trips to the market.  Further, the errand boy should check with the cook every morning before leaving for the market to learn what fruits would be needed that day.


The errand boy tracked down the family cook and asked her for a list of what fruit would be needed for upcoming recipes.  The cook was unaccustomed to using fruit in dishes for the family, and did not have a list prepared.   Then, she recalled hearing about a new fad: the “fruit smoothie” consisting solely of blended fruit.   The cook told the errand boy not to worry about locating her and asking for a new list every day; instead, he could just pick up whatever fruit looked the freshest.  In return for that saved time, upon returning to the household, the errand boy would just need to put the fruit in a blender and then transfer the blended fruit to jugs in the cooler.

This arrangement was agreeable to all. Fruit was selected every day, blended into smoothies, stored in the cooler, and served to the family with their meals.  The family’s health improved and they enjoyed smoothies for many years.


After a while, though, the oldest daughter, Mary, developed a chronic rash.  The doctor ran some tests and declared that Mary was allergic to melons in the smoothie.  A kitchen maid was assigned to taste Mary’s smoothies for any hint of melon flavor before serving them, and Mary’s health was rapidly restored.


One night, the middle daughter, Edith, became deathly ill; she could barely breathe, her heart was pounding, and she was too dizzy to stand.  The family, scared they might lose their daughter, immediately contacted the doctor. The doctor was able to treat Edith and warned the family that this was another allergic reaction. Tests showed that the cause, in this case, was pineapple in the smoothie, and another exposure could kill Edith.  Even though pineapple was exceedingly rare, just having a kitchen maid taste the smoothies for pineapple wouldn’t be enough to guarantee the daughter’s safety.  Trace amounts could go undetected by a maid but prove fatal to poor Edith.

Luckily, the doctor knew of an easy-to-use test for even small amounts of pineapple.  Just mix a bit of the smoothie with a drop of this chemical, and the resulting color would warn of any danger.  It was easy to ask another kitchen maid to run the test before serving the smoothies.  Once again, the family was assured of their good health.


A few months later, the family was hosting guests, who expressed a sudden desire for apple pie.  The cook was notified and nearly panicked at the request — the pantry was devoid of apples, and the market was closed for the day.  However, the cooler was full of smoothies, which had apple pieces in them.  All the kitchen staff jumped in to help, pouring the smoothies through strainers and picking through the chunks for pieces of apple.  Finally, enough suitable apple chunks were collected, the pie was successfully baked, and the entire family and their guests praised the kitchen’s quick thinking, hard work, and delicious results.

The apple pie was such a smash that the family soon began requesting all sorts of fruit-based desserts.  The kitchen staff already had their own duties, but the varying fruit desserts were so popular that the cook easily convinced the head of the household to hire more staff to handle the daily work of straining and separating the smoothies.  A whole room was set aside for just this activity, but it was worth it for the exquisite dishes the cook had learned to prepare for the family.

Even so, the family’s increasing demands made it hard to keep up.  Guests attended nearly every meal; banquets were held honoring war heroes and raising funds for charities.  Fruits were the main ingredients in novel dishes of all shapes, sizes, and flavors.  The cook pleaded with the family on behalf of the entire kitchen staff, and the family in turn acknowledged the overwhelming workload.  They hired a team of scientists and engineers to design and build a machine that could strain the smoothies, separate the juices, and in some cases even reconstitute the entire original fruit.


So, yet more rooms were set aside for this new team. They worked side by side with all the kitchen staff to duplicate the straining techniques perfected through years of training and practice. Centrifuges were brought in to separate the juice components. Microscopes analyzed how bits of peel could fit together. Meticulously detailed catalogs were written describing how pulps, cores, peels, and pits differ among farms, just so that all the rebuilt fruit could exactly match the original.

It was difficult, painstaking work, requiring the intense focus of dozens of people for many years, but every day the team came a little bit closer to their holy grail: unblending the smoothie.

And for all those years, the errand boy did the same chore every day.  He returned from the market, dutifully dumped the entire basket of fresh fruit into a blender for half a minute, poured the resulting smoothie into jugs, and placed them in the cooler.


If it’s so fragile that you can’t change it, then you must change it.

[Graphs: developer output versus a manual tester’s ability to test it; automated testing versus manual testing]

Cormac Herley of Microsoft Research recently published a paper titled “So Long, and No Thanks for the Externalities,” in which he argues that the common situation of users ignoring or bypassing computer security measures is economically rational, and that many of these security measures may hurt more than they help. He also suggests a calculus that can be used to find, if not a precise balance between policies that help and hurt, at least the bounds for what options should be considered.

Herley makes several excellent points about how the economic cost of security policies is frequently ignored, and I believe that, within certain constraints, his suggested calculus is helpful for evaluating the benefits and costs of these policies. Further, I strongly suspect his broader conclusion is correct, and many security policies produce a net harm. Note, however, I said that the applicability of his calculus has constraints — Herley does not explicitly identify these constraints, and thus misapplies his own calculus. The examples Herley provides are faulty and cannot be used in support of his conclusion.

A quick summary of Herley’s calculus

Herley suggests all potential security policies be evaluated in terms of their cost compared to the maximum potential benefit they could possibly provide. He makes clear that the upper bound for this benefit is the total direct losses due to the particular type of attack that the security policy is supposed to mitigate. For example, the total losses a company suffers due to dictionary attacks on passwords may be $50 million. If this is the case, then a potential security policy intended to mitigate password attacks should have a total cost of no more than $50 million: even if you assume the policy is 100% effective, a costlier policy would still cost more than it saves.

The calculus works here. It assumes that implementing a proposed security policy will lower the total direct loss — in other words, if you are losing $50 million now to dictionary attacks, after putting the policy in place you will be losing somewhere between $0 and $50 million due to those attacks. $50 million is, thus, an effective upper bound for what a proposed policy could help.
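As a toy illustration of the calculus, using the made-up figures from the example above:

```python
# Herley's screen for a *proposed* policy: even a 100%-effective
# mitigation can save at most what the attack currently costs you,
# so any policy whose total cost exceeds current direct losses is a
# guaranteed net loss. All figures are illustrative.
def worth_considering(annual_policy_cost: float,
                      annual_attack_losses: float) -> bool:
    return annual_policy_cost <= annual_attack_losses

current_losses = 50e6  # $50 million/year lost to dictionary attacks
print(worth_considering(40e6, current_losses))  # True: might pay off
print(worth_considering(60e6, current_losses))  # False: costs more than it could ever save
```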

Where Herley goes wrong

Herley makes his big mistake when he tries to work the other way and discusses policies that are already in place. He claims that, for the extant policies to make sense, their economic cost should be less than the current losses. However, what was an upper bound when considering hypothetical policies becomes a lower bound for existing policies. Consider: if you are losing $50 million to dictionary attacks with your current password policies in place, making those policies more lax will increase your losses. The new total could be $100 million, or $1 billion, or more. $50 million is the minimum you lose when you loosen your security policies.

This is a critical mistake to make, and unfortunately Herley’s examples rely on it heavily. For example:

“However, the Paypal CISO [5] states that “Forty-one basis points is the total fraud number” on Paypal’s system. Thus 0.49% of Paypal’s Transaction Volume, or $290 million, would appear to upper bound all of the password related attacks. Given that Paypal had 70 million active users in 2008, all of the annual security advice should consume no more than $290/70 = $4.14 or about seventeen minutes of twice minimum wage time per year. But even this loose bound assumes that users are liable for the loss and can address it by following security advice.”

Ignore the transcription error here (it should be 0.41%, not 0.49%), as it’s beside the point. Herley argues that, since $290 million is the current amount of fraud, the current security measures should cost no more than that. However, that’s simply wrong. $290 million is the minimum PayPal loses despite all the security measures. Take away, say, the password complexity rules, and fraud may balloon into the billions. Herley’s calculus can only apply to new rules that PayPal is considering implementing but hasn’t yet.
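Incidentally, the arithmetic inside the quote does check out. A quick sketch; the $7.25/hour 2009 US federal minimum wage is my assumption for “minimum wage”:

```python
# Reproducing the quoted back-of-the-envelope numbers.
# Assumption: "minimum wage" means the 2009 US federal rate of $7.25/hr.
fraud_losses = 290e6   # dollars per year, from the quote
active_users = 70e6    # PayPal active users, 2008
per_user = fraud_losses / active_users    # ~= $4.14 per user per year
minutes = per_user / (2 * 7.25) * 60      # ~= 17 minutes at 2x minimum wage
print(f"${per_user:.2f} per user, about {minutes:.0f} minutes per year")
```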

How to salvage this

What I’ve said here doesn’t negate the usefulness of Herley’s calculus for proposed security policies. That still would work as Herley proposes. Evaluating extant security policies requires more work, however, and is fraught with its own difficulties. I’ll discuss those in a future post.

A few months ago I was debating with a friend over IM about how far one should take unit testing. He was of the opinion that one should only bother with unit tests on complex methods, and that unit testing the simple stuff was a waste of time. I pressed him on just how simple an app or a method had to be before it was too simple to bother unit testing, and he responded with the simplest example he could think of: “After all, you don’t really need to test ‘Hello, Worl!’” [sic].

That was the end of that argument.

Last week a colleague of mine, Eric, was taking the opportunity to revisit his development environment. In light of the availability of Windows Server 2008 R2, Win7, and Beta 1 of Visual Studio 2010, he was interested in pursuing a heavily virtualized setup. As he knew I am a proponent of doing all development (including the IDEs) in virtuals and that I had converted to a Hyper-V-based development environment, we started discussing what approach he might take. Eric travels a lot, so he’s opted to work entirely on mobile devices. His primary notebook is a beast: Core 2 Quad, 8GB RAM, 17″ 1920×1200 display, 1GB nVidia Quadro FX 3700m, all in a svelte 11-pound package (including power supply). You’d think it’d be great for the developer who wants to virtualize, or the conference presenter who wants to demonstrate the latest and greatest to a crowd.

Unfortunately, Microsoft’s professional-level offerings for virtualization on notebooks are nonexistent.

At first, Eric wanted to go the Hyper-V R2 route. He installed Server 2K8 R2, installed VS2010 in the host partition, and TFS2010/SQL2K8/MOSS2007 in virtuals. He had heard me complain about the graphics performance problems with Hyper-V in the past, but wanted to see for himself. Sure enough, Visual Studio ran quite slowly. However, as it was his first time using the beta, he didn’t know if the lack of speed was just because it was a beta, or if Hyper-V was the cause. Temporarily disabling the Hyper-V role caused a severalfold speedup in the application, going from painful to pleasant. Permanently fixing this would require running XP-level (or, ugh, VGA) drivers for his top-end video card. On top of this, Hyper-V completely disables all forms of sleep in Windows. This was not an acceptable solution to a mobile user.

Frustrated, he decided to resort to Virtual PC. It’s free and easy to use, but that idea was shot down when he realized that not only does Virtual PC not support multiprocessor guests (annoying, but something he could cope with), but it won’t run 64-bit guests either. Given that many of the latest Microsoft server releases (including Windows Server 2008 R2 itself) are 64-bit only, this was a dealbreaker.

What’s left? I suggested VMware Workstation 6.5. It supports multicore guests, 64-bit guests, and letting the host computer sleep, all without painful graphics performance. It’s not free, but if you’re looking to get the job done, it’s the best solution. If you want free, VirtualBox is a good option, although not quite as polished as VMware Workstation. If you want Microsoft, you’re out of luck.

Eric went with VMware Workstation.

Finally, I should note that Intel is releasing the next round of mobile CPUs in 1Q2010. As they’ll be Nehalem chips, many of them should have improved hardware virtualization support that matches what we can get on the desktop today. While it won’t fix the Hyper-V sleep-mode problem, it will at least alleviate the Hyper-V graphics performance problem.