Unit 1: F2F/guided learning – Historical background
Introduction to cybersecurity
Cybersecurity refers to the practice of protecting computer systems, networks, and other digital devices from unauthorized access, theft, damage and other forms of cyber-attacks. It involves a set of technologies, processes, and practices designed to safeguard computer systems and data from potential security breaches, including hacking, viruses, phishing, and other malicious activities. Cybersecurity is essential in today’s digital world to ensure the confidentiality, integrity, and availability of information and computing resources. It encompasses a wide range of activities, including risk assessment, threat management, incident response, and security awareness training.
What are the origins of cybersecurity? Cybercrime has evolved significantly since the first computers went online and started communicating with each other. The level of risk faced today is considerably higher than before, but these threats have always concerned computer users, and with good reason.
As technology improved, so did cyber threats. Criminals continuously develop new ways to infiltrate systems and gather information. They may use malware or ransomware to attack companies or governmental institutions, from meat-processing plants to fuel pipelines running across the country.
Cybersecurity risks throughout time – a chronicle
The origins of cybersecurity can be traced back to the early days of computer technology, when researchers and engineers first began to develop electronic computers and networks. As early as the 1950s, computer scientists and engineers recognized the need for security measures to protect sensitive information from unauthorized access and malicious attacks.
One of the most notorious early computer security breaches occurred in 1988, when a graduate student named Robert Morris developed a program that could exploit vulnerabilities in the UNIX operating system to gain unauthorized access to other computers on the network. This incident, known as the Morris Worm, demonstrated the need for more robust security measures to protect against such attacks.
In the 1980s and 1990s, as computer networks began to proliferate, the need for cybersecurity became increasingly pressing. Hackers and cybercriminals began to develop more sophisticated techniques for attacking computer systems, and governments and businesses began to invest in more advanced security measures to protect their data and assets.
In the years since, the field of cybersecurity has continued to evolve, with new threats and challenges emerging on a regular basis. Today, cybersecurity is a critical concern for organizations and individuals around the world, and the field continues to grow and develop in response to new threats and technologies.
From the 1940s until today, the continuous development of technology has shaped cybercrime and cybersecurity into what we know them as today (Chadd, 2020):
The 1940s: The time before crime
1943 is counted as the year when the first digital computer was created. For several decades afterwards, people had limited ways to use computers in a criminal or risky manner, as only a few of them existed around the world. Most were very large, very noisy, and difficult to use.
With no interconnecting network, threats were nearly non-existent, creating a secure environment.
Later in the decade, the theory of computer viruses emerged, with John von Neumann proposing that a type of "mechanical organism" could replicate itself and cause damage.
The 1950s: The phone phreaks
Phone phreaks were people who were interested in how phones worked. They attempted to hijack the protocols that enabled engineers to work on the network remotely, allowing people to make free calls and reduce long-distance tolls.
The 1960s: All quiet on the Western Front
The 1960s was the decade when the term hacking was coined. It wasn't related to computers: it emerged when a group hacked the MIT Tech Model Railroad Club's high-tech train sets because they wanted to adjust their functionality.
Hacking and gaining access didn't seem like "big business" in these early years. These early hacking events simply aimed to gain access to systems; there were no political or commercial motives. Early hacking was more about causing a little trouble to see whether it could be done.
The 1970s: Computer security is born
Cybersecurity began with a project called the Advanced Research Projects Agency Network (ARPANET), a connectivity network developed before the internet.
Bob Thomas determined that it was possible for a computer program to move over a network. He developed a program, which he called Creeper, to move between the Tenex terminals on ARPANET, carrying and printing a simple message: "I'M THE CREEPER: CATCH ME IF YOU CAN."
This sparked a lot of interest and some concern, and it prompted Ray Tomlinson to develop a new program called Reaper. Tomlinson, who gained fame for his development of email, designed Reaper to chase and delete Creeper.
Reaper is arguably the first example of an antivirus program. Because it was itself a self-replicating program, Reaper is also often called the world's first computer worm.
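Creeper and Reaper can be pictured as two programs hopping between machines: one copying itself forward and announcing its presence, the other following behind and deleting the copies. The short Python sketch below is a hypothetical toy model of that chase (the machine names, functions, and behaviour are illustrative assumptions, not a reconstruction of the original Tenex programs).

```python
# Toy model of the Creeper/Reaper chase across a handful of "machines".
# Purely illustrative; the real programs ran on Tenex terminals over ARPANET.

machines = ["tenex-1", "tenex-2", "tenex-3", "tenex-4"]
infected = set()


def creeper_hop(step: int) -> str:
    """Creeper moves to the next machine and announces itself."""
    host = machines[step % len(machines)]
    infected.add(host)
    print(f"[{host}] I'M THE CREEPER: CATCH ME IF YOU CAN")
    return host


def reaper_chase(host: str) -> None:
    """Reaper follows one step behind and removes Creeper's copy."""
    if host in infected:
        infected.discard(host)
        print(f"[{host}] Reaper: Creeper copy deleted")


previous = None
for step in range(len(machines)):
    current = creeper_hop(step)
    if previous is not None:
        reaper_chase(previous)  # clean the machine Creeper just left
    previous = current
```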
At this time, computer technology continued to grow and expand, and most networks relied on telephone systems for connectivity. That placed a new, higher level of demand on ways to secure networks: every piece of hardware connected to the network created a new entry point, and each entry point was a potential vulnerability.
The 1980s: From ARPANET to the internet
The 1980s brought an increase in high-profile attacks, including those at National CSS, AT&T, and Los Alamos National Laboratory. The movie War Games, in which a rogue computer program takes over nuclear missile systems under the guise of a game, was released in 1983. In the same year, the terms Trojan Horse and Computer Virus were first used.
During the Cold War, the threat of cyber espionage evolved. In 1985, the US Department of Defense published the Trusted Computer System Evaluation Criteria (also known as the Orange Book), which provided guidance on:
- assessing the degree of trust that can be placed in software that processes classified or other sensitive information;
- what security measures manufacturers needed to build into their commercial products.
Security started to be taken more seriously. Savvy users quickly learned to monitor the command.com file size, having noticed that an increase in size was the first sign of potential infection. Cybersecurity measures incorporated this thinking, and a sudden reduction in free operating memory remains a sign of attack to this day.
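The same idea, recording a baseline for a critical file and flagging any growth, can be expressed in a few lines. The Python sketch below is a minimal, hypothetical illustration (the file and baseline paths are assumptions); modern integrity checkers compare cryptographic hashes rather than raw file sizes.

```python
import json
import os

# Hypothetical paths for illustration; 1980s users watched COMMAND.COM by eye.
WATCHED_FILE = "command.com"
BASELINE_FILE = "baseline.json"


def record_baseline() -> None:
    """Store the current size of the watched file as the trusted baseline."""
    with open(BASELINE_FILE, "w") as fh:
        json.dump({"size": os.path.getsize(WATCHED_FILE)}, fh)


def check_for_growth() -> bool:
    """Return True if the watched file has grown since the baseline was taken."""
    with open(BASELINE_FILE) as fh:
        baseline_size = json.load(fh)["size"]
    current_size = os.path.getsize(WATCHED_FILE)
    if current_size > baseline_size:
        print(f"Warning: {WATCHED_FILE} grew from {baseline_size} "
              f"to {current_size} bytes - possible infection.")
        return True
    return False
```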
The 1990s: The world goes online
- The first polymorphic viruses were created (code that mutates while keeping the original algorithm intact to avoid detection)
- The British computer magazine PC Today released an edition with a free disc that 'accidentally' contained the DiskKiller virus, infecting tens of thousands of computers
- EICAR (European Institute for Computer Antivirus Research) was established
Early antivirus was purely signature-based, comparing binaries on a system with a database of virus 'signatures'. This meant that early antivirus produced many false positives and used a lot of computational power – which frustrated users as productivity slowed.
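At its core, signature scanning means searching a file's bytes for known patterns. The sketch below is a deliberately simplified illustration of that idea (the signatures are invented placeholders, not real virus patterns); it also hints at why the polymorphic viruses mentioned above were troublesome, since a mutated copy no longer matches its stored signature.

```python
# Minimal sketch of signature-based scanning: look for known byte patterns
# inside a file. The signatures below are invented placeholders.
SIGNATURES = {
    "example-virus-a": b"\xde\xad\xbe\xef",
    "example-virus-b": b"MALICIOUS_MARKER",
}


def scan_file(path: str) -> list[str]:
    """Return the names of any known signatures found in the file."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]


# A polymorphic virus re-encodes its body on every infection, so the bytes on
# disk differ each time and a simple substring match like this one fails.
```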
The 2000s: Threats diversify and multiply
With the internet available in more homes and offices across the globe, cybercriminals had more devices and software vulnerabilities to exploit than ever before. And, as more and more data was being kept digitally, there was more to plunder.
The 2010s: The next generation
The 2010s saw many high-profile breaches and attacks that began to impact countries' national security and cost businesses millions.
The increasing connectedness and the digitisation of many aspects of life continued to offer cybercriminals new opportunities to exploit. Cybersecurity tailored specifically to the needs of businesses became more prominent, and in 2011, Avast launched its first business product.
Next-generation cybersecurity uses different approaches, such as behavioural analysis and machine learning, to increase the detection of new and previously unseen threats while also reducing the number of false positives.
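One way to picture the shift is scoring behaviour instead of matching bytes: a program is flagged only when several suspicious traits occur together, which helps keep false positives down. The sketch below is a hypothetical, heavily simplified heuristic scorer (the traits, weights, and threshold are assumptions); real products combine behavioural telemetry, machine learning models, and reputation data.

```python
# Hypothetical heuristic scoring: each suspicious trait adds to a score, and
# only a combination of traits crosses the alert threshold.
SUSPICIOUS_TRAITS = {
    "writes_to_startup_folder": 2,
    "disables_security_tooling": 4,
    "encrypts_many_files_quickly": 5,
    "contacts_unknown_server": 2,
}
ALERT_THRESHOLD = 6  # assumed value; tuning it trades detection against false positives


def looks_malicious(observed_traits: list[str]) -> bool:
    """Return True when the combined behaviour score reaches the threshold."""
    score = sum(SUSPICIOUS_TRAITS.get(trait, 0) for trait in observed_traits)
    return score >= ALERT_THRESHOLD


# A backup tool that merely encrypts files quickly scores 5 and is not flagged,
# while ransomware that also disables security tooling scores 9 and is flagged.
print(looks_malicious(["encrypts_many_files_quickly"]))                               # False
print(looks_malicious(["encrypts_many_files_quickly", "disables_security_tooling"]))  # True
```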
Ethical hacking origins
Ethical hacking focuses on detecting vulnerabilities in an application, system, or organisation's infrastructure (government, private, or commercial) that an attacker could use to exploit an individual or organisation. It helps prevent cyber-attacks and security breaches by lawfully hacking into systems and looking for weak points.
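In practice, an ethical hacker's work often starts with simple reconnaissance of systems they are authorised to test, for example checking which network ports are open. The Python sketch below is a minimal, hypothetical illustration using the standard socket module (the target host and port list are assumptions); such checks should only ever be run against systems you have explicit permission to test.

```python
import socket

# Hypothetical target and ports; only scan systems you are authorised to test.
TARGET_HOST = "127.0.0.1"
COMMON_PORTS = [22, 80, 443, 3389]


def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; a successful connection means the port is open."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0


for port in COMMON_PORTS:
    state = "open" if port_is_open(TARGET_HOST, port) else "closed"
    print(f"{TARGET_HOST}:{port} is {state}")
```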
The term ethical hacking was coined by IBM executive John Patrick in the 1990s. The concept and application of the process were already known, but a term to define them did not exist before. When hacking first became relevant in the 1960s, it was more of a compliment for excellent computing skills.
But the term soon acquired a negative connotation due to rising cybercrime. By the 1980s, many movies based on the concept of hacking had been released, turning it into a mainstream term. By 2000, the commercialisation of hacking had begun, making it a career opportunity for many.
An ethical hacker is a person who hacks into a computer network to test or evaluate its security rather than with malicious or criminal intent.