About this book

This engaging and accessible text addresses the fundamental question: What Is Computer Science? The book showcases a set of representative concepts broadly connected by the theme of information security, for which the presentation of each topic can be treated as a "mini" lecture course, demonstrating how it allows us to solve real problems, as well as how it relates to other subjects. The discussions are further supported by numerous examples and practical hands-on exercises. Features: presents a concise introduction to the study of algorithms and describes how computers work; introduces the concepts of data compression, and error detection and correction; highlights the role of data structures; explores the topic of web-search; reviews both historical and modern cryptographic schemes; examines how a physical system can leak information; discusses the idea of randomness; investigates the science of steganography; provides additional supplementary material at an associated website.

Table of Contents

Foundations of Computer Science

Frontmatter

1. Compressing and Correcting Digital Media

Abstract
This chapter formalises the concept of positional number systems, a way of representing numbers in different bases. Using decimal as a starting point, the goal is to introduce binary: motivation is given by the need to store information on CDs and DVDs. For example, if we can represent any number using 0 and 1 then we can physically store such numbers as pits and lands on the surface of a CD.
Given the basics of number representation, the idea is then to discuss the concepts of data compression, and error detection and correction. The running example of CDs and DVDs again provides motivation: both store vast amounts of data and are fairly robust to damage, but how do they achieve this from a technical point of view?
Daniel Page, Nigel Smart
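
As a hedged illustration of the ideas outlined in this abstract (a sketch invented for this page, not code from the chapter), the following Python fragment converts a decimal number into binary using the positional rule, then appends a single parity bit as a minimal form of error detection; the function names are assumptions made for the example.

    def to_binary(n):
        # Repeatedly divide by the base (2), collecting remainders:
        # each remainder is one binary digit, least significant first.
        digits = []
        while n > 0:
            digits.append(n % 2)
            n //= 2
        return list(reversed(digits)) or [0]

    def add_parity(bits):
        # Append one extra bit so the total number of 1s is even;
        # flipping any single bit then becomes detectable.
        return bits + [sum(bits) % 2]

    word = add_parity(to_binary(13))   # 13 = 1101 in binary
    print(word)                        # [1, 1, 0, 1, 1]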

2. Writing and Comparing Algorithms

Abstract
This chapter presents a brief introduction to the study of algorithms, which are described from first principles as formal sets of directions that specify how to perform a task. The goal is to demonstrate that algorithms are fundamental tools in Computer Science but, on the other hand, no more complicated than a set of driving directions or cookery instructions.
A running example, namely the task of exponentiation (or powering) of one number by another, is used throughout; similar arithmetic is used within the cryptographic schemes studied in later chapters. By analysing how different algorithms for exponentiation behave, the chapter shows how we can compare them fairly against each other. Ultimately, this means we can select the most efficient algorithm for a given task.
Daniel Page, Nigel Smart
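
As a hedged sketch of the kind of comparison described in this abstract (not the chapter's own code), the following Python fragment computes x to the power y in two ways and counts the multiplications each performs: the naive method needs y - 1, whereas square-and-multiply needs roughly two per bit of y.

    def power_naive(x, y):
        # Multiply x by itself y - 1 times.
        r, count = x, 0
        for _ in range(y - 1):
            r, count = r * x, count + 1
        return r, count

    def power_square_multiply(x, y):
        # Scan the bits of y: square at each step, multiply when the bit is 1.
        r, count = 1, 0
        for bit in bin(y)[2:]:
            r, count = r * r, count + 1
            if bit == '1':
                r, count = r * x, count + 1
        return r, count

    print(power_naive(3, 1000)[1])            # 999 multiplications
    print(power_square_multiply(3, 1000)[1])  # 16 multiplications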

3. Playing Hide-and-Seek with Virus Scanners

Abstract
This chapter tries to bridge the gap between a fundamental topic in Computer Science, namely how computer processors execute programs, and a topic in information security, namely computer viruses.
It starts by introducing the concept of a fetch-decode-execute loop, and the implications of Harvard versus von Neumann architectures. By adopting a step-by-step approach and some very simple programs, the goal is to show that there is no magic involved: even complex, modern computer processors are based on fairly simple principles that everyone can understand. Using this background, the chapter explores a technical mechanism used by computer viruses to evade detection by virus scanners.
Specifically, the ability of a program to modify itself during execution (so-called self-modifying code) allows polymorphic viruses to hide their intentions from a scanner seeking to detect them.
Daniel Page, Nigel Smart
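
To make the fetch-decode-execute idea concrete, here is a hedged, minimal Python sketch of a toy von Neumann-style machine (invented for this page, not the processor model used in the chapter): instructions and data share one memory, so a running program can overwrite its own instructions.

    def run(memory):
        acc, pc = 0, 0                        # accumulator, program counter
        while True:
            op, arg = memory[pc]              # fetch
            pc += 1                           # decode and execute below
            if op == 'LOAD':
                acc = arg
            elif op == 'ADD':
                acc += arg
            elif op == 'STORE':               # writing into memory can
                memory[arg] = ('LOAD', acc)   # replace an instruction
            elif op == 'HALT':
                return acc

    program = [('LOAD', 40),   # acc = 40
               ('STORE', 3),   # overwrite address 3 with ('LOAD', 40)
               ('ADD', 2),     # acc = 42
               ('HALT', 0),    # already replaced: execution falls through
               ('HALT', 0)]
    print(run(program))        # prints 40, not 42: the code modified itself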

4. How Long Is a Piece of String?

Abstract
This chapter highlights the important role of data structures as a natural companion to algorithms. The premise is that computers can only store and operate on primitive data types, which essentially means just numbers; if we need another data type (e.g., to represent an image, or an email), we need to design it ourselves by enforcing a format on numerical data.
The running example used is that of strings, which in theory are just sequences of characters. Faced with the practical need to represent strings in memory, however, a range of options exists. By examining two different string data structures and a range of algorithms that operate on them, various trade-offs in terms of efficiency are highlighted: ultimately, what seem to be subtle or unimportant differences can have a major impact.
Daniel Page, Nigel Smart
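
The trade-offs mentioned in this abstract can be illustrated by a hedged sketch (invented here, not the chapter's own data structures): a terminator-based string must be scanned to find its length, whereas a length-prefixed string stores the length explicitly.

    # Two ways to lay out a string in memory, modelled as lists of
    # character codes; both layouts are assumptions for illustration.
    def make_terminated(s):
        return [ord(c) for c in s] + [0]          # 0 marks the end

    def make_prefixed(s):
        return [len(s)] + [ord(c) for c in s]     # length stored up front

    def length_terminated(data):
        # Must scan every character until the terminator: O(n) work.
        n = 0
        while data[n] != 0:
            n += 1
        return n

    def length_prefixed(data):
        # The length is the first element: O(1) work.
        return data[0]

    s = "hello"
    print(length_terminated(make_terminated(s)))  # 5, after scanning
    print(length_prefixed(make_prefixed(s)))      # 5, immediately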

5. Demystifying Web-Search: the Mathematics of PageRank

Abstract
This chapter overviews Google web-search, one of the most ubiquitous and influential information systems available today: it stores and processes huge volumes of diverse information and scales to cope with vast numbers of users, yet produces high-quality results for each search query they type.
Aside from being so useful on a day-to-day basis, and easy to grasp, it represents an excellent example because the techniques used are based on fairly introductory Mathematics. Moving step-by-step through the application of basic graph and probability theory, the chapter acts as an introduction to both the data structures and algorithms that underpin the system as a whole.
Daniel Page, Nigel Smart
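
As a hedged sketch of the underlying idea (a toy example, not Google's implementation), the following Python fragment runs a basic PageRank iteration on a three-page web, modelling a random surfer who follows links with probability 0.85 and otherwise jumps to a random page.

    # Tiny web: each page lists the pages it links to.
    links = {'A': ['B', 'C'], 'B': ['C'], 'C': ['A']}
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}    # start with equal ranks
    d = 0.85                                     # damping factor

    for _ in range(50):                          # repeat until ranks settle
        new = {p: (1 - d) / len(pages) for p in pages}
        for p in pages:
            share = d * rank[p] / len(links[p])  # rank flows along links
            for q in links[p]:
                new[q] += share
        rank = new

    # C ends up ranked highest: it is linked to by both A and B.
    print({p: round(r, 3) for p, r in rank.items()})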

Examples from Information Security

Frontmatter

6. Using Short Programs to Make and Break Historical Ciphers

Abstract
Introductory books on cryptography often start by studying a range of historical ciphers. This is attractive for two reasons: first, because there is some interesting related history (their success or failure is often aligned with an important event), and, second, because they are simple enough to understand without a significant background in Mathematics.
This chapter has a similar premise. The goal is to demonstrate the two sides of cryptography: the constructive side, where new schemes (in this case for encryption) are designed and used, and the destructive side, where said schemes are attacked and broken. By using simple programs available on every UNIX-based computer, the chapter demonstrates how historical cryptanalysts were able to decrypt messages that their senders would probably have considered secure.
Daniel Page, Nigel Smart
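
The chapter itself works with standard UNIX tools; as a hedged stand-in, here is a short Python sketch of the same two sides: encrypting with a simple shift (Caesar) cipher, then breaking it by brute force over all 26 possible keys.

    import string

    ALPHABET = string.ascii_uppercase

    def shift(text, key):
        # Encrypt by rotating each letter key places through the alphabet.
        return ''.join(ALPHABET[(ALPHABET.index(c) + key) % 26]
                       if c in ALPHABET else c
                       for c in text)

    ciphertext = shift("ATTACK AT DAWN", 3)
    print(ciphertext)                        # DWWDFN DW GDZQ

    # The destructive side: with only 26 keys, simply try them all.
    for key in range(26):
        print(key, shift(ciphertext, -key))  # key 3 reveals the message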

7. Generation and Testing of Random Numbers

Abstract
The concept of randomness is quite abstract: what does random even mean, when can we describe a number as random, and what can random numbers be used for? Although we might have an intuition about such questions, the central role of random numbers in cryptography (e.g., as key material) demands care via more formal treatment.
This forms the motivation for the chapter, whose goal is to provide answers in two main parts. First, it explores metrics for differentiating between random and non-random numbers (or "good" and "bad" randomness); various simple experiments to generate and test random numbers provide practical evidence. It then introduces the concept of pseudo-randomness, which, unlike real randomness, can be generated by algorithms such as the Linear Congruential Generator (LCG).
Daniel Page, Nigel Smart
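
A hedged sketch of the pseudo-random generation named in this abstract: a Linear Congruential Generator with textbook-style parameters (an assumption for this example, not necessarily those used in the chapter), followed by a crude frequency check as one example of a test; real statistical test suites are far more thorough.

    def lcg(seed, a=1103515245, c=12345, m=2**31):
        # Each output is derived deterministically from the previous one:
        # x_{n+1} = (a * x_n + c) mod m
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = lcg(seed=42)
    samples = [next(gen) % 10 for _ in range(10000)]   # digits 0..9

    # Crude test: each digit should appear roughly 1000 times.
    counts = [samples.count(d) for d in range(10)]
    print(counts)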

8. Safety in Numbers: Modern Cryptography from Ancient Arithmetic

Abstract
This chapter shifts the focus on cryptography from historical ciphers to their present-day replacements. The content is split into two parts. First, the fundamentals of number theory (a branch of pure Mathematics) are introduced; the main topic is that of modular arithmetic, including a range of simple algorithms used to compute results. From this starting point, it then describes two modern cryptographic schemes (the RSA encryption and Diffie-Hellman key-exchange schemes) that more or less all of us rely on every day. For example, both form components within the Secure Sockets Layer (SSL), which supports secure web-browsing.
Daniel Page, Nigel Smart
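
A hedged, toy-sized sketch of one of the schemes named in this abstract: Diffie-Hellman key exchange with deliberately tiny, insecure parameters, using only the modular exponentiation that the chapter's number theory builds up to.

    # Public parameters (far too small to be secure; chosen for readability).
    p, g = 23, 5            # a prime modulus and a generator

    a = 6                   # Alice's secret exponent
    b = 15                  # Bob's secret exponent

    A = pow(g, a, p)        # Alice sends g^a mod p
    B = pow(g, b, p)        # Bob sends g^b mod p

    # Each side combines its own secret with the other's public value.
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    print(A, B, shared_alice, shared_bob)   # both shared values agree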

9. Hiding a Needle in a Haystack: Concealed Messages

Abstract
Whereas cryptography might try to ensure the confidentiality of data by encrypting it, the topic of steganography relates to hiding data: the premise is that for some applications, preventing the data being found is as important as preventing it being understood (or decrypted).
Although the topic alone is interesting, it allows an introduction to how computers represent and process images (e.g., those taken using a digital camera). Focusing on rasterised and vector images, two types of simple steganographic technique are introduced: both allow one to embed a message within the image while leaving it apparently unaltered, hence hiding the presence of the message.
Daniel Page, Nigel Smart
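
A hedged sketch of the first kind of technique, least-significant-bit embedding in a raster image, modelled here as a plain list of grey-scale pixel values rather than a real image file; the function names are assumptions made for this illustration.

    def embed(pixels, bits):
        # Overwrite the least significant bit of each pixel with one message
        # bit; the change is at most 1 in 255, invisible to the eye.
        return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

    def extract(pixels, n):
        # Read the message back from the low bits of the first n pixels.
        return [p & 1 for p in pixels[:n]]

    image = [200, 13, 144, 76, 91, 180, 33, 250]   # toy "image"
    message = [1, 0, 1, 1]
    stego = embed(image, message)
    print(stego)                    # pixels differ by at most 1
    print(extract(stego, 4))        # [1, 0, 1, 1]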

10. Picking Digital Pockets

Abstract
In comparison to previous chapters, which studied the security of encryption schemes based on their theoretical, Mathematical properties, this chapter shifts the perspective slightly. As well as reasoning in theory about how secure a scheme is, the premise is that we also need to worry about its physical implementation.
Two example scenarios are examined. The first deals with guessing passwords: given the basic strategy of trying every password as a starting point, can one do better with some extra information (e.g., execution time) collected passively via a so-called side-channel? The second not only allows passive collection of information, but also active influence over execution: with the ability to tamper with a mobile telephone, for example, can one guess its PIN more easily?
Daniel Page, Nigel Smart
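
A hedged sketch of the first scenario described above: a password check that exits as soon as one character mismatches leaks, via its execution time, how many leading characters of a guess were correct, whereas a constant-time comparison removes that side-channel. The function names are invented for this illustration.

    def check_leaky(guess, secret):
        # Stops at the first wrong character: a guess whose prefix is
        # correct takes measurably longer, leaking information.
        if len(guess) != len(secret):
            return False
        for g, s in zip(guess, secret):
            if g != s:
                return False
        return True

    def check_constant_time(guess, secret):
        # Always compares every character, so timing reveals nothing
        # about how much of the guess was correct.
        if len(guess) != len(secret):
            return False
        diff = 0
        for g, s in zip(guess, secret):
            diff |= ord(g) ^ ord(s)
        return diff == 0

    print(check_leaky("swordfish", "swordfish"))          # True
    print(check_constant_time("swordfXsh", "swordfish"))  # False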