Last week I posted about attempts to develop robots to perform "human" tasks, and the challenges inherent in creating a machine capable of mimicking and co-existing with people.
A recent Economist article describes another effort to make computers exhibit a fundamental trait of humanity: error. Cybersecurity experts at the University of Southern California (known as USC in some other parts of the country) are testing the security of computer networks by creating software programs that recreate the human errors that open those networks to attack.
Human mistakes, as opposed to software or hardware failures or inadequacies, account for the majority of computer security breaches. Users fail to follow rules about downloads, pop-ups, and untrusted sites, and often intentionally disable security features on their computers. Fatigue and hunger make such mistakes even more likely.
These scientists have created "cognitive agents," computer programs that simulate the behavior of users, managers, and IT staff, and particularly the ways in which these individuals compromise a computer network.
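The article doesn't describe the agents' internals, but the basic idea can be illustrated with a toy agent-based simulation. The sketch below is purely hypothetical (the class name, probabilities, and list of risky actions are my own illustrative inventions, not the USC design): each simulated user makes a series of decisions, and fatigue raises the odds of a risky one.

```python
import random

# A purely illustrative "cognitive agent": it probabilistically makes the kinds
# of risky choices (clicking pop-ups, disabling protections) that open a
# network to attack. All names and numbers here are invented for illustration;
# they are not taken from the USC project or the Economist article.

class UserAgent:
    def __init__(self, name, fatigue=0.0):
        self.name = name
        self.fatigue = fatigue  # 0.0 (alert) through 1.0 (exhausted)

    def step(self):
        """Simulate one decision; fatigue raises the chance of a risky action."""
        p_mistake = min(1.0, 0.05 + 0.3 * self.fatigue)
        if random.random() < p_mistake:
            return (self.name, random.choice([
                "clicked an untrusted link",
                "disabled antivirus",
                "ran an unknown download",
            ]))
        return None

# A toy run: two simulated users each make 50 decisions in a "day".
agents = [UserAgent("alice", fatigue=0.2), UserAgent("bob", fatigue=0.8)]
incidents = [e for _ in range(50) for a in agents if (e := a.step())]
print(f"{len(incidents)} risky actions out of {50 * len(agents)} decisions")
for who, what in incidents[:5]:
    print(f"  {who}: {what}")
```

The appeal of the approach is that you can point simulated users like these at a model of a network and probe its weaknesses, rather than waiting for real users to slip.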
As mentioned in the piece, this project's focus on isolating and addressing the human aspect of security recognizes the fallacy of the old saying that "To err is human, but to foul things up completely requires a computer." It also underscores the truth in the statement that "behind every error blamed on computers there are at least two human errors, including the error of blaming it on the computer." As Bruce Schneier observed many years ago, "security is a process, not a product," and process is fundamentally human.
So when assigning responsibility for the success or failure of your network, consider the timeless observation by Walt Kelly's Pogo (brought to my attention by my father many years ago): "We Have Met The Enemy and He Is Us."
"