Vulnerability (computing) - Wikipedia, the free encyclopedia.

In computer security, a vulnerability is a weakness which allows an attacker to reduce a system's information assurance. A vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw. In this frame, vulnerabilities are also known as the attack surface. Vulnerability management is the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. Using "vulnerability" with the same meaning as "risk" can lead to confusion: risk is tied to the potential of a significant loss, so there can be vulnerabilities without risk, for example when the affected asset has no value.
A vulnerability with one or more known instances of working and fully implemented attacks is classified as an exploitable vulnerability. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software to when access was removed, a security fix was available/deployed, or the attacker was disabled.

Standards bodies define the term in several ways. The April 2010 National Information Assurance Glossary and the NIST SP 800 series give definitions such as: in computer security, a weakness in the physical layout, organization, procedures, personnel, management, administration, hardware, or software that may be exploited to cause harm to the ADP system or activity; in computer security, any weakness or flaw existing in a system; the attack or harmful event, or the opportunity available to a threat agent to mount that attack.

Matt Bishop and Dave Bailey give a state-based definition: the system computes through the application of state transitions that change the state of the system. All states reachable from a given initial state using a set of state transitions fall into the class of authorized or unauthorized, as defined by a security policy. In their paper, the definitions of these classes and transitions are considered axiomatic. A vulnerable state is an authorized state from which an unauthorized state can be reached using authorized state transitions. A compromised state is the state so reached. An attack is a sequence of authorized state transitions that ends in a compromised state. By definition, an attack begins in a vulnerable state. A vulnerability is a characterization of a vulnerable state which distinguishes it from all non-vulnerable states. If generic, the vulnerability may characterize many vulnerable states; if specific, it may characterize only one.

The National Information Assurance Training and Education Center defines vulnerability as:
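The Bishop and Bailey state-transition definition can be sketched directly in code: given a transition graph, a set of authorized states, and a set of authorized transitions, a vulnerable state is an authorized state from which some unauthorized state is reachable using only authorized transitions. A minimal sketch (the state and transition names are illustrative, not taken from the paper):

```python
from collections import deque

def vulnerable_states(transitions, authorized_states, authorized_transitions):
    """Bishop/Bailey: an authorized state is vulnerable if an unauthorized
    (compromised) state is reachable from it via authorized transitions."""
    vulnerable = set()
    for start in authorized_states:
        seen = {start}
        queue = deque([start])
        while queue:
            state = queue.popleft()
            for label, dest in transitions.get(state, []):
                if label not in authorized_transitions or dest in seen:
                    continue
                seen.add(dest)
                if dest in authorized_states:
                    queue.append(dest)     # keep exploring authorized states
                else:
                    vulnerable.add(start)  # attack path found: start is vulnerable
    return vulnerable

# Illustrative system: a setuid bug is itself an *authorized* transition
# that nevertheless leads to an unauthorized (compromised) state.
transitions = {
    "logged_out": [("login", "user_session")],
    "user_session": [("logout", "logged_out"), ("setuid_bug", "root_shell")],
}
print(vulnerable_states(transitions,
                        authorized_states={"logged_out", "user_session"},
                        authorized_transitions={"login", "logout", "setuid_bug"}))
```

Note how the search stops at the first unauthorized state it reaches: per the definition, an attack is a transition sequence that ends in a compromised state, so there is no need to explore beyond it.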
A weakness in system security procedures, hardware design, internal controls, etc. A weakness in the physical layout, organization, procedures, personnel, management, administration, hardware, or software that may be exploited to cause harm to the ADP system or activity. The presence of a vulnerability does not in itself cause harm; a vulnerability is merely a condition or set of conditions that may allow the ADP system or activity to be harmed by an attack. An assertion primarily concerning entities of the internal environment (assets): we say that an asset (or class of assets) is vulnerable (in some way, possibly involving an agent or collection of agents); we write V(i,e), where e may be an empty set. Susceptibility to various threats. A set of properties of a specific internal entity that, in union with a set of properties of a specific external entity, implies a risk. The characteristics of a system which cause it to suffer a definite degradation (incapability to perform the designated mission) as a result of having been subjected to a certain level of effects in an unnatural (man-made) hostile environment.

Vulnerability and risk factor models. Exploiting a vulnerability can potentially compromise the confidentiality, integrity, or availability of resources (not necessarily the vulnerable one) belonging to an organization and/or other parties involved (customers, suppliers). This so-called CIA triad is the basis of information security. An attack can be active when it attempts to alter system resources or affect their operation, compromising integrity or availability; a passive attack attempts to learn or make use of information from the system without affecting its resources, compromising confidentiality. Countermeasures are also called security controls; when applied to the transmission of information they are called security services.

Password management flaws are a common example of vulnerability: the computer user stores the password on the computer where a program can access it, and users re-use passwords between many programs and websites.
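The password-storage flaw above is typically mitigated by never storing the password itself: keep only a random salt and a slow salted hash, and re-derive the hash to verify a login. A minimal sketch using Python's standard library (the function names and scrypt parameters are illustrative choices, not from the article):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only this pair is stored, never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

A slow, memory-hard hash such as scrypt raises the cost of offline guessing if the stored pair leaks, and the per-user salt prevents one cracked password from revealing every account that re-used it.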
For example, operating systems with policies such as default permit grant every program and every user full access to the entire computer. Some websites serve harmful spyware or adware; after visiting those websites, computer systems become infected, and personal information is collected and passed on to third parties. A software bug may allow an attacker to misuse an application. Programs that do not check user input can allow unintended direct execution of commands or SQL statements; buffer overflows, SQL injection, and similar attacks all exploit non-validated input. Social engineering is an increasing security concern.

Vulnerability consequences. The fact that IT managers, or upper management, can (easily) know that IT systems and applications have vulnerabilities and yet take no action to manage the IT risk is seen as misconduct in most legislations. Privacy law forces managers to act to reduce the impact or likelihood of that security risk. An information technology security audit is a way to let independent parties certify that the IT environment is managed properly and to lessen the responsibilities, at least by demonstrating good faith. A penetration test is a form of verification of the weaknesses and countermeasures adopted by an organization: a white hat hacker tries to attack an organization's information technology assets to find out how easy or difficult it is to compromise the IT security.

How a vulnerability should be disclosed is a matter of debate; as reported by The Tech Herald in August 2010, several major vendors issued guidelines and statements addressing how they will deal with disclosure. Usually, vulnerability information is discussed on a mailing list or published on a security web site and results in a security advisory afterward. The time of disclosure is the first date a security vulnerability is described on a channel where the disclosed information on the vulnerability fulfills the following requirements: The information is freely available to the public. The vulnerability information is published by a trusted and independent channel/source.
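The non-validated-input point can be made concrete with SQL injection. In the sketch below (using Python's built-in sqlite3; the table and data are invented for illustration), string concatenation lets attacker input rewrite the query, while a parameterized query treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is pasted into the SQL text, so the OR clause
# becomes part of the query and every row is returned.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Mitigated: a parameterized query binds the input purely as data, so it
# merely looks for a user literally named "nobody' OR '1'='1".
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(unsafe), len(safe))  # prints: 2 0
```

The only difference between the two queries is whether the database ever sees the attacker's text as SQL; input checking and parameter binding keep it as data.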
The vulnerability has undergone analysis by experts such that risk rating information is included upon disclosure.

Identifying and removing vulnerabilities. Many software tools exist that can aid in the discovery, and sometimes removal, of vulnerabilities in a computer system. Though these tools can provide an auditor with a good overview of possible vulnerabilities present, they cannot replace human judgment. Relying solely on scanners will yield false positives and a limited-scope view of the problems present in the system. Vulnerabilities have been found in every major operating system. The only way to reduce the chance of a vulnerability being used against a system is through constant vigilance, including careful system maintenance (e.g., applying software patches), best practices in deployment, and auditing both during development and throughout the deployment lifecycle. See Social engineering (security).

Four examples of vulnerability exploits: an attacker finds and uses an overflow weakness to install malware to export sensitive data; an attacker convinces a user to open an email message with attached malware; an insider copies a hardened, encrypted program onto a thumb drive and cracks it at home; a flood damages one's computer systems installed at the ground floor.

References.
Software vulnerabilities. Air Force Software Protection Initiative.
Taylor & Francis Group. ISBN 978-1-4…
ISO/IEC. April 2010.
Bishop, Matt; Bailey, Dave. A Critical Analysis of Vulnerability Taxonomies. Technical Report CSE-96-11, Department of Computer Science, University of California at Davis, September 1996.
Schou, Corey (1996). Handbook of INFOSEC Terms, Version 2.0. CD-ROM. Idaho State University & Information Systems Security Organization.
NIATEC Glossary.
ISACA. The Risk IT Framework (registration required). Archived at the Wayback Machine.
Wright, Joe; Harmening, Jim (2009). Computer and Information Security Handbook. Morgan Kaufmann Publications. ISBN 978-0-1…
The COAST Laboratory, Department of Computer Sciences, Purdue University. CiteSeerX.
Anderson, Ross. Why Cryptosystems Fail. Technical report, University of Cambridge Computer Laboratory, January 1994.
Schlager, Neil. When Technology Fails: Significant Technological Disasters, Accidents, and Failures of the Twentieth Century. Gale Research Inc., 1994.
Erickson, Jon. Hacking: The Art of Exploitation, Second Edition.
Kiountouzis, E. Information Systems Security: Facing the Information Society of the 21st Century. London: Chapman & Hall, Ltd.