Compliance and Privacy
The resource centre for busy senior executives seeking the latest insights into IT Compliance & Privacy issues for major organizations, helping enterprise security across Europe
 
 

Breaking Global News
Global Compliance and Privacy News - Breaking News, updated every 30 minutes:
•  Compliance, Privacy and Security
•  Money Laundering
•  Phishing
•  Regulatory Issues
•  SOX, Basel 2, MiFID


You Tell Us: SSL Technology
We use SSL technology for web data entry points:
Always / Sometimes / Never
What is SSL?

News
Are Smartphones Endangering Security? - Wick Hill
Dealing with Internet Security Threats - Ian Kilpatrick
How the New EU Rules on Data Export Affect Companies in and Outside the EU - Thomas Helbing
Farmers' Data Leak Highlights Old Technology Use - Wick Hill
Saving Money with SFTP - Wick Hill
UK Information Commissioner targets firm selling vetting data - Eversheds e80
12 Key Steps to Internet Security - Wick Hill
Telephone Monitoring Legality in the UK - Dechert
Firewall or UTM - Wick Hill
UK Information Commissioner demands mobile device encryption - Eversheds e80
Data loss - liability, reputation and mitigation of risk - Eversheds e80
Phorm, Webwise and OIX - BCS Security Forum
The challenges of PCI DSS compliance - Thales, Russell Fewing
"Quality" Data Vendor Spams us! Editor astounded!
National Gateway Security Survey 2008 - Wick Hill
Unified Threat Management - Watchguard Technologies


Industry Blogs
Tim Berners Lee's Blog
Tim Callan's SSL Blog
Davis Wright Tremaine's Privacy & Security Law Blog
Emergent Chaos Blog
Michael Farnum's Blog
Phillip Hallam-Baker's Blog - The dotFuture Manifesto: Internet Crime, Web Services, Philosophy
Stuart King's Security and Risk Management Blog
David Lacey's IT Security Blog
Metasploit Official Blog
Jeff Pettorino's Security Convergence Blog
Jeff Richards's Demand Insights Blog
David Rowe's Risk Management Blog
Bruce Schneier's Security Blog
Larry Seltzer's Security Weblog
Mike Spinney's Private Communications Blog
Richard Stiennon's Threat Chaos Blog
The TechWeb Blog
Tim Trent's Marketing by Permission Blog
Rebecca Wong's DP Thinker Blog

Newsletters
23 February Newsletter
Newsletter Archives are located in "News"

Industry Update
Internet Security Intelligence Briefing - November 2005
Find out the latest trends in e-commerce and web usage, and the latest threats from adware and spyware

Reports
Phorm, Webwise and OIX - BCS Security Forum

'The Any Era has Arrived, and Everyone has Noticed' - Stratton Sclavos - VeriSign
Identity Security - Time to Share
Malicious code threats - iDefense
Public Alerts - updated as they happen from Stopbadware.org
Public Alerts - updated as they happen from Websense
Public Advisories - updated as they happen, from iDefense
Phoraging - Privacy invasion through the Semantic web: a special report by Mike Davies of VeriSign

Legislation
Privacy Laws & Business International E-news, Issue 57
Privacy Laws & Business United Kingdom E-news, Issue 60

Security Reviews
February 2007 - VeriSign Security Review
The security review archive is here

Case Studies
Finance Industry Case Study Example - a case study on a finance industry company

White Papers
VeriSign® Intelligent Infrastructure for the 21st Century
VeriSign® Intelligent Infrastructure for Security
VeriSign® Intelligent Infrastructure: An Overview
Identity Protection Fraud Detection Service - description of the service
Life of a Threat - Video on Threat Management Lifecycle
Optimizing Enterprise Information Security Compliance - Dealing with all the audits
For a full list of all whitepapers, visit our Whitepaper library

Legal Notices
Privacy Policy
Terms of use


Bruce Schneier's Security Blog

Bruce Schneier is an internationally renowned security technologist and author. Described by The Economist as a "security guru," Schneier is best known as a refreshingly candid and lucid security critic and commentator. When people want to know how security really works, they turn to Schneier.

His first bestseller, Applied Cryptography, explained how the arcane science of secret codes actually works, and was described by Wired as "the book the National Security Agency wanted never to be published." His book on computer and network security, Secrets and Lies, was called by Fortune "[a] jewel box of little surprises you can actually use." His current book, Beyond Fear, tackles the problems of security from the small to the large: personal safety, crime, corporate security, national security.

Schneier also publishes a free monthly newsletter, Crypto-Gram, with over 100,000 readers. In its seven years of regular publication, Crypto-Gram has become one of the most widely read forums for free-wheeling discussions, pointed critiques, and serious debate about security. As head curmudgeon at the table, Schneier explains, debunks, and draws lessons from security stories that make the news. Regularly quoted in the media, Schneier has written op-ed pieces for several major newspapers, and has testified on security before the United States Congress on many occasions.

Bruce Schneier is the founder and CTO of Counterpane Internet Security, Inc., and has a biography on Wikipedia.


  • Another Interview

    I was interviewed by MinnPost.


  • Conspiracy Theories and the NSA

    I've recently seen two articles speculating on the NSA's capability, and practice, of spying on members of Congress and other elected officials. The evidence is all circumstantial and smacks of conspiracy thinking -- and I have no idea whether any of it is true or not -- but it's a good illustration of what happens when trust in a public institution fails.

    The NSA has repeatedly lied about the extent of its spying program. James R. Clapper, the director of national intelligence, has lied about it to Congress. Top-secret documents provided by Edward Snowden, and reported on by the Guardian and other newspapers, repeatedly show that the NSA's surveillance systems are monitoring the communications of American citizens. The DEA has used this information to apprehend drug smugglers, then lied about it in court. The IRS has used this information to find tax cheats, then lied about it. It's even been used to arrest a copyright violator. It seems that every time there is an allegation against the NSA, no matter how outlandish, it turns out to be true.

    Guardian reporter Glenn Greenwald has been playing this well, dribbling the information out one scandal at a time. It's looking more and more as if the NSA doesn't know what Snowden took. It's hard for someone to lie convincingly if he doesn't know what the opposition actually knows.

    All of this denying and lying results in us not trusting anything the NSA says, anything the president says about the NSA, or anything companies say about their involvement with the NSA. We know secrecy corrupts, and we see that corruption. There's simply no credibility, and -- the real problem -- no way for us to verify anything these people might say.

    It's a perfect environment for conspiracy theories to take root: no trust, assuming the worst, no way to verify the facts. Think JFK assassination theories. Think 9/11 conspiracies. Think UFOs. For all we know, the NSA might be spying on elected officials. Edward Snowden said that he had the ability to spy on anyone in the U.S., in real time, from his desk. His remarks were belittled, but it turns out he was right.

    This is not going to improve anytime soon. Greenwald and other reporters are still poring over Snowden's documents, and will continue to report stories about NSA overreach, lawbreaking, abuses, and privacy violations well into next year. The "independent" review that Obama promised of these surveillance programs will not help, because it will lack both the power to discover everything the NSA is doing and the ability to relay that information to the public.

    It's time to start cleaning up this mess. We need a special prosecutor, one not tied to the military, the corporations complicit in these programs, or the current political leadership, whether Democrat or Republican. This prosecutor needs free rein to go through the NSA's files and discover the full extent of what the agency is doing, as well as enough technical staff who have the capability to understand it. He needs the power to subpoena government officials and take their sworn testimony. He needs the ability to bring criminal indictments where appropriate. And, of course, he needs the requisite security clearance to see it all.

    We also need something like South Africa's Truth and Reconciliation Commission, where both government and corporate employees can come forward and tell their stories about NSA eavesdropping without fear of reprisal.

    Yes, this will overturn the paradigm of keeping everything the NSA does secret, but Snowden and the reporters he's shared documents with have already done that. The secrets are going to come out, and the journalists doing the outing are not going to be sympathetic to the NSA. If the agency were smart, it'd realize that the best thing it could do would be to get ahead of the leaks.

    The result needs to be a public report about the NSA's abuses, detailed enough that public watchdog groups can be convinced that everything is known. Only then can our country go about cleaning up the mess: shutting down programs, reforming the Foreign Intelligence Surveillance Act system, and reforming surveillance law to make it absolutely clear that even the NSA cannot eavesdrop on Americans without a warrant.

    Comparisons are springing up between today's NSA and the FBI of the 1950s and 1960s, and between NSA Director Keith Alexander and J. Edgar Hoover. We never managed to rein in Hoover's FBI -- it took his death for change to occur. I don't think we'll get so lucky with the NSA. While Alexander has enormous personal power, much of his power comes from the institution he leads. When he is replaced, that institution will remain.

    Trust is essential for society to function. Without it, conspiracy theories naturally take hold. Even worse, without it we fail as a country and as a culture. It's time to reinstitute the ideals of democracy: The government works for the people, open government is the best way to protect against government abuse, and a government keeping secrets from its people is a rare exception, not the norm.

    This essay originally appeared on TheAtlantic.com.


  • The NSA's Cryptographic Capabilities

    The latest Snowden document is the US intelligence "black budget." There's a lot of information in the few pages the Washington Post decided to publish, including an introduction by Director of National Intelligence James Clapper. In it, he drops a tantalizing hint: "Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic."

    Honestly, I'm skeptical. Whatever the NSA has up its top-secret sleeves, the mathematics of cryptography will still be the most secure part of any encryption system. I worry a lot more about poorly designed cryptographic products, software bugs, bad passwords, companies that collaborate with the NSA to leak all or part of the keys, and insecure computers and networks. Those are where the real vulnerabilities are, and where the NSA spends the bulk of its efforts.

    This isn't the first time we've heard this rumor. In a WIRED article last year, longtime NSA-watcher James Bamford wrote:

    According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US.

    We have no further information from Clapper, Snowden, or this other source of Bamford's. But we can speculate.

    Perhaps the NSA has some new mathematics that breaks one or more of the popular encryption algorithms: AES, Twofish, Serpent, triple-DES. It wouldn't be the first time this happened. Back in the 1970s, the NSA knew of a cryptanalytic technique called "differential cryptanalysis" that was unknown in the academic world. That technique broke a variety of academic and commercial algorithms that we all thought secure. We learned better in the early 1990s, and now design algorithms to be resistant to that technique.

    It's very probable that the NSA has newer techniques that remain undiscovered in academia. Even so, such techniques are unlikely to result in a practical attack that can break actual encrypted plaintext.

    The naive way to break an encryption algorithm is to brute-force the key. The complexity of that attack is 2^n, where n is the key length. All cryptanalytic attacks can be viewed as shortcuts to that method. And since the efficacy of a brute-force attack is a direct function of key length, these attacks effectively shorten the key. So if, for example, the best attack against DES has a complexity of 2^39, that effectively shortens DES's 56-bit key by 17 bits.

    That's a really good attack, by the way.

    Right now the upper practical limit on brute force is somewhere under 80 bits. However, using that as a guide gives us some indication as to how good an attack has to be to break any of the modern algorithms. These days, encryption algorithms have, at a minimum, 128-bit keys. That means any NSA cryptanalytic breakthrough has to reduce the effective key length by at least 48 bits in order to be practical.
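
    To put numbers on that, here is a minimal Python sketch of the arithmetic above; all figures are the ones quoted in the essay, nothing new is assumed:

        # An attack of complexity 2^c against an n-bit key effectively
        # "shortens" the key: (n - c) bits are shaved off, and c bits
        # of brute-force work remain.

        def bits_shaved(key_bits, attack_complexity_bits):
            return key_bits - attack_complexity_bits

        # DES: 56-bit key, best attack ~2^39 -> 17 bits shaved off.
        print(bits_shaved(56, 39))    # 17

        # With brute force practical somewhere under 80 bits, a useful
        # breakthrough against a 128-bit key must shave at least 48 bits.
        print(bits_shaved(128, 80))   # 48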

    There's more, though. That DES attack requires an impractical 70 terabytes of known plaintext encrypted with the key we're trying to break. Other mathematical attacks require similar amounts of data. In order to be effective in decrypting actual operational traffic, the NSA needs an attack that can be executed with the known plaintext in a common MS-Word header: much, much less.

    So while the NSA certainly has symmetric cryptanalysis capabilities that we in the academic world do not, converting that into practical attacks on the sorts of data it is likely to encounter seems so impossible as to be fanciful.

    More likely is that the NSA has some mathematical breakthrough that affects one or more public-key algorithms. There are a lot of mathematical tricks involved in public-key cryptanalysis, and absolutely no theory that provides any limits on how powerful those tricks can be.

    Breakthroughs in factoring have occurred regularly over the past several decades, allowing us to break ever-larger public keys. Much of the public-key cryptography we use today involves elliptic curves, something that is even more ripe for mathematical breakthroughs. It is not unreasonable to assume that the NSA has some techniques in this area that we in the academic world do not. Certainly the fact that the NSA is pushing elliptic-curve cryptography is some indication that it can break them more easily.

    If we think that's the case, the fix is easy: increase the key lengths.

    Assuming the hypothetical NSA breakthroughs don't totally break public-key cryptography -- and that's a very reasonable assumption -- it's pretty easy to stay a few steps ahead of the NSA by using ever-longer keys. We're already trying to phase out 1024-bit RSA keys in favor of 2048-bit keys. Perhaps we need to jump even further ahead and consider 3072-bit keys. And maybe we should be even more paranoid about elliptic curves and use key lengths above 500 bits.
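
    As a hedged illustration of how cheap that defense is, here is a sketch using the Python cryptography package (the library is an illustrative assumption; the essay names no tooling). Moving to longer keys is a one-parameter change:

        from cryptography.hazmat.primitives.asymmetric import rsa, ec

        # RSA: phase out 1024-bit keys for 2048-bit, or jump to 3072-bit.
        key_2048 = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        key_3072 = rsa.generate_private_key(public_exponent=65537, key_size=3072)

        # Elliptic curves: a curve above the 500-bit mark (NIST P-521).
        key_p521 = ec.generate_private_key(ec.SECP521R1())

        print(key_2048.key_size, key_3072.key_size, key_p521.key_size)  # 2048 3072 521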

    One last blue-sky possibility: a quantum computer. Quantum computers are still toys in the academic world, but have the theoretical ability to quickly break common public-key algorithms -- regardless of key length -- and to effectively halve the key length of any symmetric algorithm. I think it extraordinarily unlikely that the NSA has built a quantum computer capable of performing the magnitude of calculation necessary to do this, but it's possible. The defense is easy, if annoying: stick with symmetric cryptography based on shared secrets, and use 256-bit keys.
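
    A minimal sketch of that symmetric fallback, again assuming the Python cryptography package: authenticated encryption under a 256-bit key, which even a quantum key-halving would only reduce to roughly 128 bits of effective strength.

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)   # 256-bit shared secret
        aesgcm = AESGCM(key)
        nonce = os.urandom(12)                      # 96-bit nonce; never reuse with a key

        ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
        assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"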

    There's a saying inside the NSA: "Cryptanalysis always gets better. It never gets worse." It's naive to assume that, in 2013, we have discovered all the mathematical breakthroughs in cryptography that can ever be discovered. There's a lot more out there, and there will be for centuries.

    And the NSA is in a privileged position: It can make use of everything discovered and openly published by the academic world, as well as everything discovered by it in secret.

    The NSA has a lot of people thinking about this problem full-time. According to the black budget summary, 35,000 people and $11 billion annually are part of the Department of Defense-wide Consolidated Cryptologic Program. Of that, 4 percent -- or $440 million -- goes to "Research and Technology."

    That's an enormous amount of money; probably more than everyone else on the planet spends on cryptography research put together. I'm sure that results in a lot of interesting -- and occasionally groundbreaking -- cryptanalytic research results, maybe some of it even practical.

    Still, I trust the mathematics.

    This essay originally appeared on Wired.com.

    EDITED TO ADD: That was written before I could talk about this.

    EDITED TO ADD: The Economist expresses a similar sentiment.


  • The NSA Is Breaking Most Encryption on the Internet

    The new Snowden revelations are explosive. Basically, the NSA is able to decrypt most of the Internet. They're doing it primarily by cheating, not by mathematics.

    It's joint reporting between the Guardian, the New York Times, and ProPublica.

    I have been working with Glenn Greenwald on the Snowden documents, and I have seen a lot of them. These are my two essays on today's revelations.

    Remember this: The math is good, but math has no agency. Code has agency, and the code has been subverted.

    EDITED TO ADD (9/6): Someone somewhere commented that the NSA's "groundbreaking cryptanalytic capabilities" could include a practical attack on RC4. I don't know one way or the other, but that's a good speculation.

    EDITED TO ADD (9/6): Relevant Slashdot and Reddit threads.


  • The Effect of Money on Trust

    Money reduces trust in small groups, but increases it in larger groups. Basically, the introduction of money allows society to scale.

    The team devised an experiment where subjects in small and large groups had the option to give gifts in exchange for tokens.

    They found that there was a social cost to introducing this incentive. When all tokens were "spent", a potential gift-giver was less likely to help than they had been in a setting where tokens had not yet been introduced.

    The same effect was found in smaller groups, who were less generous when there was the option of receiving a token.

    "Subjects basically latched on to monetary exchange, and stopped helping unless they received immediate compensation in a form of an intrinsically worthless object [a token].

    "Using money does help large societies to achieve larger levels of co-operation than smaller societies, but it does so at a cost of displacing normal of voluntary help that is the bread and butter of smaller societies, in which everyone knows each other," said Prof Camera.

    But he said that this negative result was not found in larger anonymous groups of 32; instead, co-operation increased with the use of tokens.

    "This is exciting because we introduced something that adds nothing to the economy, but it helped participants converge on a behaviour that is more trustworthy."

    He added that the study reflected monetary exchange in daily life: "Global interaction expands the set of trade opportunities, but it dilutes the level of information about others' past behaviour. In this sense, one can view tokens in our experiment as a parable for global monetary exchange."


  • Journal of Homeland Security and Emergency Management

    I keep getting alerts of new issues, but there are rarely articles I find interesting.


  • Human-Machine Trust Failures

    I jacked a visitor's badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they're enabled when you check in at building security. You're supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

    I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

    The person after me had problems, though. Some part of the system knew something was wrong, and wouldn't let her out. Eventually, the guard had to manually override something.

    My point in telling this story is not to demonstrate how I beat the EEOB's security -- I'm sure the badge was quickly deactivated and showed up in some missing-badge log next to my name -- but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn't adequately explain it to the guards. The guards knew it but didn't know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

    In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state -- how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn't happen as expected. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time -- that requires understanding on both parts. Each time things go wrong, and the machine portion doesn't communicate well, the human portion trusts it a little less.

    This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system -- especially the machine part -- better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We've all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: "The computer is always right."

    Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they're easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they're bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That's why "if you see something, say something" fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.
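
    As a toy sketch of that flagging requirement (every name here is invented for illustration, not taken from any real badge system), a state machine can explain an unexpected transition to the guard instead of failing silently:

        # Allowed badge-lifecycle transitions: (state, event) -> next state.
        ALLOWED = {
            ("issued", "drop_in_slot"): "returned",
            ("returned", "exit_gate"): "left_building",
        }

        def process_event(badge_id, state, event, alert):
            next_state = ALLOWED.get((state, event))
            if next_state is None:
                # Communicate the machine's state in plain language,
                # not just "error".
                alert(f"badge {badge_id}: event '{event}' in state '{state}' "
                      f"is unexpected -- hold and check at the gate")
                return state
            return next_state

        # A visitor who keeps the badge and walks straight out:
        state = process_event("V-042", "issued", "exit_gate", alert=print)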

    The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That's the best way to give humans the trust and understanding they need in the machine part of any security system.

    This essay previously appeared in IEEE Security & Privacy.


  • SHA-3 Status

    NIST's John Kelsey gave an excellent talk on the history, status, and future of the SHA-3 hashing standard. The slides are online.


  • Business Opportunities in Cloud Security

    Bessemer Venture Partners partner David Cowan has an interesting article on the opportunities for cloud security companies.

    Richard Stiennon, an industry analyst, has a similar article.

    And Zscaler comments on a 451 Research report on the cloud security business.


  • Syrian Electronic Army Cyberattacks

    The Syrian Electronic Army attacked again this week, compromising the websites of the New York Times, Twitter, the Huffington Post, and others.

    Political hacking isn't new. Hackers were breaking into systems for political reasons long before commerce and criminals discovered the Internet. Over the years, we've seen U.K. vs. Ireland, Israel vs. Arab states, Russia vs. its former Soviet republics, India vs. Pakistan, and US vs. China.

    There was a big one in 2007, when the government of Estonia was attacked in cyberspace following a diplomatic incident with Russia. It was hyped as the first cyberwar, but the Kremlin denied any Russian government involvement. The only individuals positively identified were young ethnic Russians living in Estonia.

    Poke at any of these international incidents, and what you find are kids playing politics. The Syrian Electronic Army doesn't seem to be an actual army. We don't even know if they're Syrian. And -- to be fair -- I don't know their ages. Looking at the details of their attacks, it's pretty clear they didn't target the New York Times and others directly. They reportedly hacked into an Australian domain name registrar called Melbourne IT, and used that access to disrupt service at a bunch of big-name sites.

    We saw this same tactic last year from Anonymous: hack around at random, then retcon a political reason why the sites they successfully broke into deserved it. It makes them look a lot more skilled than they actually are.

    This isn't to say that cyberattacks by governments aren't an issue, or that cyberwar is something to be ignored. Attacks from China reportedly are a mix of government-executed military attacks, government-sponsored independent attackers, and random hacking groups that work with tacit government approval. The US also engages in active cyberattacks around the world. Together with Israel, the US employed a sophisticated computer virus (Stuxnet) to attack Iran in 2010.

    For the typical company, defending against these attacks doesn't require anything different from what you've traditionally been doing to secure yourself in cyberspace. If your network is secure, you're secure against amateur geopoliticians who just want to help their side.

    This essay originally appeared on the Wall Street Journal's website.


  • Our Newfound Fear of Risk

    We're afraid of risk. It's a normal part of life, but we're increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren't free. They cost money, of course, but they cost other things as well. They often don't provide the security they advertise, and -- paradoxically -- they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

    Three examples:

    1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at minimal provocation, often when it's not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes -- a wrong address on a warrant, a misunderstanding -- result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.

    2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in dollars to implement and in long-lasting effects on students.

    3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade -- including the wars in Iraq and Afghanistan -- money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

    There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

    There's a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world -- among the wealthier of us who live in peaceful Western countries -- our lives have become safer.

    Our notions of risk are not absolute; they're based more on how far they are from whatever we think of as "normal." So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

    Some of this fear results from imperfect risk perception. We're bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are -- and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

    Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn't approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn't discipline -- no matter how benign the infraction -- the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

    We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There's a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they're arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved -- medicine is the big one, but other sciences as well -- we become better at mitigating and recovering from those sorts of risks.

    Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn't able to figure out how to topple structures constructed under some new and safer building code, and an automobile won't invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

    When you implement measures to mitigate the effects of the random risks of the world, you're safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you'd expect -- and you also get more side effects, because we all adapt.

    We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

    This essay previously appeared on Forbes.com.

    EDITED TO ADD (8/5): Slashdot thread.


  • 1983 Article on the NSA

    The moral is that NSA surveillance overreach has been going on for a long, long time.



The complete list of bloggers featured by Compliance and Privacy appears in the Industry Blogs section above.


Please note: blogs contain items that are the responsibility of the author and are presented "as is", with no endorsement from, editing by, or approval from complianceandprivacy.com. Copyright in each blog item belongs to the originator of the item. Each blog item is reproduced from the relevant feed of the originating blog, either in full or in part as that feed itself determines. All blog item header links lead directly to those items on the original blog. Blogs are dynamic. We offer them in good faith but, where the content is outside our control, we cannot be responsible for errors, omissions or other conduct. Some of the links on this page remain on this site; others go to other sites - that is the nature of a blog. When you leave this site, you are encouraged to check the privacy policy of the new site before leaving personal data there.


 


This site is independent of all its sources
The contents of the site are sourced from across the industry. All copyrights are acknowledged.