## Cloud Encryption Threat Map

The cloud has gained considerable popularity within the past decade, to the point that many companies can roll out their own infrastructure, or use infrastructure hosted by a cloud provider, with relative ease. With this new world, however, come new threats, and it is important that organizations adequately model their networks, data and possible threats to ensure sensitive data is kept secure. Kenn White was kind enough to create this threat scenarios mind map, and I thought it was worth sharing as it does a great job of showing which scenarios different security technologies help protect against.

## Intro To Crypto, PT3: DES and the Feistel Network

Welcome to Part 3 of my intro to cryptography course review. In this post I decided to go over DES and Feistel Networks.

## Block Cipher

Block ciphers are built by iteration. They take some PT block and pass it into an E or D function, which uses a key as a second input and produces a CT block. R(k, m) is called a “round function”: the cipher takes the initial key k, uses some form of key expansion to create “round keys”, and then applies one round key in each round of encryption.

## DES

The Data Encryption Standard (DES) is an encryption standard that was created in the early 70’s through work by IBM and the NBS (now NIST). The core idea of DES is a construction called a Feistel Network.

```Given functions f1, …, fd: {0,1}^n → {0,1}^n
Goal: Build an invertible function F: {0,1}^2n → {0,1}^2n```

### Feistel Network

A Feistel Network has the following construction: in each round i, the two half-blocks are mapped as (Li, Ri) = (Ri-1, Li-1 ⊕ fi(Ri-1)). Now there must be some way to prove that this construction can be inverted so that we can perform decryption. Remember that for useful encryption we must be able to both encrypt and decrypt the original message m. The only time we don’t need this is when performing an integrity check, but that is out of scope for DES.

Referring to the above diagram we can see how the Feistel Network performs its encryption; to determine the construction for decryption we can simply reverse the diagram and repeat the inverted round d times. Now that we have an idea of how to perform E and D, we need to ask how many rounds are actually needed to be secure. It turns out that DES uses 16 rounds of the Feistel Network, taking a 64 bit input and outputting a 64 bit ciphertext. All that remains now is to determine what the function f() does during the encryption and decryption process.
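To make the inversion argument concrete, here is a minimal Python sketch of a Feistel network over integer half-blocks. The round function `toy_f` is a made-up stand-in, not the real DES f; the point is that decryption inverts encryption for *any* choice of round function.

```python
def feistel_encrypt(left, right, round_keys, f):
    # Each round: L_i = R_{i-1},  R_i = L_{i-1} XOR f(k_i, R_{i-1})
    for k in round_keys:
        left, right = right, left ^ f(k, right)
    return left, right

def feistel_decrypt(left, right, round_keys, f):
    # Run the rounds with the keys in reverse order, undoing one round per step.
    for k in reversed(round_keys):
        left, right = right ^ f(k, left), left
    return left, right

def toy_f(k, x):
    # Made-up round function (NOT the real DES f); any function inverts here.
    return (x * 31 + k) & 0xFFFFFFFF

keys = [3, 141, 59, 26]  # made-up round keys
ct = feistel_encrypt(0xDEADBEEF, 0xCAFEBABE, keys, toy_f)
assert feistel_decrypt(*ct, keys, toy_f) == (0xDEADBEEF, 0xCAFEBABE)
```

Notice that `toy_f` is never inverted; only XOR's self-cancelling property is used, which is why f need not be invertible at all.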

Below is an image of the flow of the function f(ki, x), where x is the input half-block and ki is the respective key for that round. It should be noted that in this diagram the function E stands for expansion, not encryption. This is required because the input half-block is 32 bits but the round keys for DES are 48 bits, so the input must first be expanded to match the length of the key. Courtesy of Wikipedia (https://en.wikipedia.org/wiki/File:Data_Encription_Standard_Flow_Diagram.svg)

It is also worth asking what these “S-boxes” are. For this post we won’t go into detail; we just need to remember that they map 6 bit inputs to 4 bit outputs, which allows the 48 bit value to once again be shrunk down to 32 bits.
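As a rough illustration, here is a Python sketch of a 6-bit to 4-bit S-box lookup. The indexing convention (outer two bits select a row, inner four bits select a column) is how DES does it, but the table itself is made up, not one of DES's real S-boxes.

```python
# Hypothetical 4x16 S-box table (NOT a real DES table), values 0..15.
SBOX = [[(r * 5 + c * 7) % 16 for c in range(16)] for r in range(4)]

def sbox_lookup(six_bits):
    # DES convention: bits 1 and 6 pick the row, bits 2..5 pick the column.
    row = ((six_bits >> 5) << 1) | (six_bits & 1)
    col = (six_bits >> 1) & 0xF
    return SBOX[row][col]  # 4-bit output

assert 0 <= sbox_lookup(0b101101) <= 15
```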

### The DES Challenge

In 1997 RSA Security sponsored contests offering a prize to the first team that broke a DES encrypted CT.

1997 – The DESCHALL Project used an internet cluster of thousands of machines to break the CT within 3 months, using a brute force attack that tried possible DES keys.

1998 – The EFF built a machine called ‘Deep Crack’ that performed the same brute force attack in about 3 days.

On modern computers the brute force attack can be done in less than a day on a very affordable budget. For this reason we have to acknowledge that plain DES is no longer secure and that 56-bit keys should no longer be used, as the keyspace is far too small.

With that we have taken a good look at the operations of DES: how it works and what lessons we can take away from its weaknesses and strengths. In the next post in this series I will go over how DES was strengthened to overcome this issue of a small key space (hint: 3DES). As always, if you have suggestions please provide feedback in the comments, I really appreciate it!

Feature Image courtesy of renjith krishnan at FreeDigitalPhotos.net


## Intro To Crypto, PT2

Welcome to part 2 of my posts on the material from the “Cryptography I” course. The material in this course is designed to teach students how to scientifically determine which concepts and algorithms are, and are not, cryptographically secure. I am continuing this series of posts so that I have a way to review the information at a later time, as well as to provide my interpretation of the material for others to read and comment on. Enjoy!

## Review

In the first part I covered topics like Probability Distributions (Uniform and Point), Events, Randomized Algorithms and Independence.

## XOR

XOR proves to be a very important operation in cryptography and is actually used in many ciphers like OTP, DES and AES. Therefore if you are not familiar with the concept of XOR, now would be a good time to read up on it. The following truth table provides the mapping for XOR given inputs X and Y.

```(X, Y) | Z
===========
(0, 0) | 0
(0, 1) | 1
(1, 0) | 1
(1, 1) | 0```
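In Python, `^` is bitwise XOR; a quick check of the table above, plus the two properties that make XOR so useful in ciphers:

```python
# Verify the truth table with Python's bitwise XOR operator.
for x, y, z in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    assert x ^ y == z

# The properties ciphers rely on: XOR is self-inverse and 0 is the identity.
for x in (0, 1):
    assert x ^ x == 0
    assert x ^ 0 == x
```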

## Symmetric Ciphers

A Symmetric Cipher is one that uses the same key (k) for the sender and receiver. For example if Alice sends Bob a message, both Alice and Bob must use the same key to be able to securely send and receive messages.

More formally, over a key space of all possible keys (K), a message space of all possible messages (M) and a ciphertext space of all possible CTs (C), the cipher is a pair of efficient algorithms, encryption (E) and decryption (D), where:

```E: K x M -> C
D: K x C -> M
such that... ∀ m in M, k in K: D(k, E(k, m)) = m

∀ = For all```

Note that E is often randomized but D is always deterministic!

## The One Time Pad (OTP)

From Wikipedia:
First described by Frank Miller in 1882, the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert S. Vernam for the XOR operation used for the encryption of a one-time pad. It is derived from the Vernam cipher, named after Gilbert Vernam, one of its inventors. Vernam’s system was a cipher that combined a message with a key read from a punched tape.

OTP uses a very simple concept of XORing the plaintext message with the key to produce some ciphertext (c) that cannot easily be decrypted.

```ciphertext: E(k, m) = k ⊕ m
plaintext: D(k, c) = k ⊕ c

Note: D(k, E(k, m)) = D(k, k ⊕ m) = k ⊕ (k ⊕ m) = (k ⊕ k) ⊕ m = m

Example:

m = 0110001
k = 1101110
c = 1011111```

Note that k must be at least as long as m to securely encrypt m
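A minimal Python sketch of OTP over bytes (the per-byte XOR is the same operation as the bitwise formulas above):

```python
import secrets

def otp_encrypt(key, msg):
    assert len(key) == len(msg)  # the pad must be as long as the message
    return bytes(k ^ m for k, m in zip(key, msg))

otp_decrypt = otp_encrypt  # decryption is the exact same XOR operation

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # fresh, truly-random, single-use key
ct = otp_encrypt(key, msg)
assert otp_decrypt(key, ct) == msg
```

Note that `secrets` draws from the OS's cryptographic randomness source; an ordinary PRNG like `random` would not be an acceptable substitute here.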

As we can see, OTP is a very fast encryption and decryption cipher, but it needs long keys. Now, however, we really need to ask: what defines a “good” cipher?

## Perfect Secrecy

The basic idea to secure a plaintext (PT) message is that the ciphertext (CT) should reveal no information about the PT.

More formally, a cipher (E, D) over (K, M, C) has *perfect secrecy* IF..

```∀ m0, m1 in M, (len(m0) = len(m1))
∀ c in C, Pr[E(k, m0) = c] = Pr[E(k, m1) = c]
Where k is sampled uniformly (truly at random) from K.```

This means that if we intercept c, we cannot tell whether the plaintext was m0 or m1, and this must hold for all possible messages.

### OTP

At this point we have defined what a secure cipher is and now should look at OTP and see if we can prove that it has perfect secrecy.

```∀ m,c: Pr[E(k, m) = c] = (# keys k in K such that E(k, m) = c)/|K|
So... ∀ m,c: #{k in K: E(k, m) = c} = constant
If this is true then the cipher has perfect secrecy!```

So… Let m in M and c in C, how many OTP keys map m -> c?
For OTP, if E(k, m) = c:

```k ⊕ m = c
k = m ⊕ c```

Therefore, ∀ m,c: #{k in K: E(k, m) = c} = 1
OTP has perfect secrecy, and so no ciphertext-only attack is possible. This does not, however, mean that OTP has no other possible attacks.
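For a tiny message space we can verify the counting argument by brute force, e.g. in Python for 3-bit OTP: exactly one key maps any m to any c.

```python
# For 3-bit OTP, count the keys that map each message m to each
# ciphertext c; the count is the same constant (exactly 1) for every pair.
n = 3
for m in range(2 ** n):
    for c in range(2 ** n):
        count = sum(1 for k in range(2 ** n) if k ^ m == c)
        assert count == 1
```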

### Bad News

This idea of perfect secrecy is pretty useful as it allows us to easily determine whether a cipher is secure or not. However, it comes with a major drawback: |K| >= |M|, which means that if we have a very long message, say 1GB, then the key must also be *at least* that long. This limitation makes it difficult to use OTP in practice, but we can still use it to observe certain properties of XOR and cryptography.

## Stream Ciphers

To make OTP practical we must use something called a Pseudo-Random Generator (PRG) to replace our truly random key with a pseudorandom key such that:

```PRG: function G: {0,1}^s -> {0,1}^n, where n >> s
G must be "efficient" and its output should "look" random.```

So the question we now ask is: “Is this a secure method? Does it guarantee perfect secrecy?” The answer is no: this does not meet our previous definition of secure, since the size of the key is much smaller than the message.

What this means is that we must use a less strict definition of “secure”, since perfect secrecy cannot be achieved practically. Instead we will make a definition in which security depends on the specific PRG’s ability to generate a “random-looking” result.
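As a sketch of the stream-cipher idea, here a short seed is expanded into a long keystream by hashing seed-plus-counter. This hash-in-counter-mode generator is my own illustrative stand-in, not the course's construction, and real systems should use a vetted stream cipher such as ChaCha20.

```python
import hashlib

def keystream(seed, n):
    # Expand a short seed into n pseudorandom bytes by hashing
    # seed || counter repeatedly. Illustrative only; not a vetted PRG.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def stream_encrypt(seed, msg):
    # XOR the message with the expanded keystream (decryption is identical).
    return bytes(a ^ b for a, b in zip(keystream(seed, len(msg)), msg))

msg = b"a much longer message than the seed"
ct = stream_encrypt(b"short-seed", msg)
assert stream_encrypt(b"short-seed", ct) == msg
```

The seed plays the role of k, and `keystream` plays the role of G: a few bytes of true randomness are stretched into as many pseudorandom bytes as the message needs.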

## PRG

Suppose that a PRG is predictable:

`∃ i (index): given G(k) bits [1 .. i], one can efficiently compute G(k) bits [i+1, ..., n]`

In practice this can mean that if a CT has a predictable header, like say an SMTP message then the attacker can use this to predict the rest of the message. This is because they can use the predictable nature of the PRG to easily guess what follows the known header.

Formally it can be written like:

```∃ some "efficient" algorithm A and some 1 <= i <= n - 1 such that:
Pr[A(G(k)[1..i]) = G(k)[i+1]] >= 1/2 + ε```

For some non-negligible epsilon (ex. ε >= 1/2^30), the generator is predictable!

Therefore an unpredictable PRG is one that meets the following constraint:
∀ i: no efficient adversary can predict bit i+1 with non-negligible ε.

## Attacks On OTP

Finally to end this post I wanted to cover the attacks on OTP that are discussed in the course as they are quite interesting.

### Attack 1

The “Two Time Pad” is very insecure!
The important lesson to remember from this attack is to NEVER use a stream cipher key more than once. Let’s look at this in practice.

```Suppose that c1 = m1 ⊕ PRG(k)
Suppose that c2 = m2 ⊕ PRG(k)```

If the attacker has c1 and c2, they can perform a simple operation to recover m1 ⊕ m2, and from there, using the redundancy in real messages, often the messages themselves!

```PRG(k) = 101000
m1 = 011011, c1 = 110011
m2 = 110010, c2 = 011010

c1 ⊕ c2 = 101001
m1 ⊕ m2 = 101001 (look familiar?)```

Therefore when using a stream cipher for client-server interaction, the client and server must generate a new key every time one of them sends a message, which is very impractical!
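The key-reuse leak is easy to demonstrate in Python: XORing the two ciphertexts cancels the pad and exposes m1 ⊕ m2.

```python
import secrets

pad = secrets.token_bytes(16)  # one pad, wrongly used twice
m1 = b"attack at dawn!!"
m2 = b"retreat at noon!"
c1 = bytes(p ^ b for p, b in zip(pad, m1))
c2 = bytes(p ^ b for p, b in zip(pad, m2))

# The pad cancels out: c1 XOR c2 == m1 XOR m2, leaking plaintext structure.
xor_ct = bytes(a ^ b for a, b in zip(c1, c2))
xor_pt = bytes(a ^ b for a, b in zip(m1, m2))
assert xor_ct == xor_pt
```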

### Attack 2

OTP provides no message integrity. We can see this in a very simple example:

Suppose that a user sends the CT over the network (ie. m ⊕ k, without the key) and the adversary intercepts it. The adversary can then XOR that CT with some value p of their choosing such that:

`j = (m ⊕ k) ⊕ p`

The adversary can then send j on to the user, and when decrypted the result will actually be m ⊕ p (since XORs of the same value cancel out).

In a more practical example, suppose the adversary knew the header of the message was “From: Bob”, they could XOR it with a key in such a way that when the user decrypts the message it ends up reading “From: Eve”!
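That header-rewriting attack is a one-liner in Python: the adversary XORs the ciphertext with (known plaintext ⊕ desired plaintext), never touching the key.

```python
import secrets

msg = b"From: Bob"
key = secrets.token_bytes(len(msg))
ct = bytes(k ^ m for k, m in zip(key, msg))

# Adversary knows the header and flips it without knowing the key:
# delta = known plaintext XOR desired plaintext.
delta = bytes(a ^ b for a, b in zip(b"From: Bob", b"From: Eve"))
tampered = bytes(c ^ d for c, d in zip(ct, delta))

# The legitimate recipient decrypts and sees the forged header.
assert bytes(k ^ c for k, c in zip(key, tampered)) == b"From: Eve"
```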

## Conclusion

Hopefully you found this post interesting and informative. I know there was a lot of reading, but I think it’s worth learning all these concepts because they really do provide the fundamentals for what comes next in the material. I hope to write the next part of this series soon, so stay tuned!

Feature Image courtesy of lekkyjustdoit at FreeDigitalPhotos.net


## Intro To Crypto, PT1

Well, it will take some work; security is not like what they show on TV. You don’t need green-on-black text, special goggles or an unlimited “enhance” function. Instead, it requires sitting down and understanding the history of the field, what it means to be “secure” and what limitations or assumptions you can work under. This summer I decided to start my journey into the vast field of cryptography, and am taking an online course from Stanford University that provides an introduction to the subject. It is appropriately named “Cryptography I” and is the first part of a two-part course, with the second part offered later in the fall. Both are taught by a really awesome professor, Dan Boneh, who I find explains the material very well. I decided to make some posts about what I learn in this course as I go through the material, both to share my knowledge and to write it down somewhere for later reference.

## The Start

Cryptography provides two important functions: it allows users to establish and share a secret key, and it allows secure communication that provides confidentiality and integrity of the message. Crypto can also provide authentication, anonymity and even a digital currency!

To begin a journey into the world of crypto we have to understand the basic steps needed to perform rigorous research. The course outlines these in 3 basic steps:

1. Precisely specify a threat model.
Example: a digital signature cannot be forged by an adversary.

2. Propose some construction.

3. Prove that breaking the construction under the threat model would solve some underlying hard problem.
Example: since any attacker who could break the construction under the threat model could also solve the hard problem, and the hard problem is assumed unsolvable, the construction cannot be broken.

## The Basics

Before the real crypto analysis can begin however, there needs to be some understanding of the key concepts that are used in the course. These are Probability Distributions, Events, Randomized Algorithms and Independence.

### Probability Distributions

Suppose you have some finite set U (our ‘universe’), for example U = {0,1}^n. This notation means that U is the set of all strings of n bits, each bit being either 0 or 1. So for example if n were 2, U would contain the following values: {00, 01, 10, 11}.

Now that there is a universe to limit the calculation, the next step is to understand the two types of distribution. Note that the total of the probabilities always adds up to 1.

#### Uniform Distribution

In a uniform distribution the probability of selecting any particular element of the set is exactly 1/|U|, where |U| denotes the size of the set U. So in the example above, a uniform distribution has probability 1/4 of selecting any specific element.

#### Point Distribution

In a point distribution all of the probability mass is assigned to a single element: that element is selected with probability 1, and every other element with probability 0. For example, in the set U defined above, we could have the following probabilities:

```Elements:    00  01  10  11
Probability:  0   0   1   0   (sums to 1)```

### Events

For our universe U, a certain subset of it is called an “event”. For example, we could define an event A to contain all elements whose last 2 least significant bits equal 11. We can then use the concept of distribution from above to determine the probability of that event.

Lets for example say:

```U = {0,1}^8
A = {all x in U such that last 2 LSB(x) = 11}
For a uniform distribution on U: Pr[A] = ?```

Well, we can look at U and quickly determine that there are 64 elements inside it that end in 11. We also know that |U| = 2^8 = 256, and that the probability of any one element being selected is 1/256. Therefore we can simply compute 64 * 1/256 = 1/4, which means that Pr[A] = 1/4!
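We can confirm this count by brute-force enumeration in Python:

```python
U = range(2 ** 8)                       # the universe {0,1}^8
A = [x for x in U if x & 0b11 == 0b11]  # event: last two bits are 11
assert len(A) == 64
assert len(A) / len(U) == 1 / 4         # Pr[A] under the uniform distribution
```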

### Randomized Algorithms

Let’s first look at what a deterministic algorithm is before we define a randomized one. Imagine you have two sets: a set of inputs (which we will call m) and a set of outputs these inputs map to (which we call A(m)). In a deterministic algorithm, a given m always maps to the same A(m), because the function A() takes only m as input and so will always give the same output.

A randomized algorithm, on the other hand, is defined so that A() takes two arguments: m maps to A(m, r), where r is a random sample from the set {0,1}^n. If you are familiar with the concept of a “seed” or “salt”, that is basically what r is; it lets us produce a different output A(m, r) even for the same input m. Note that this relies on r never being repeated.

### Independence

If we want to determine whether two events are independent, we need a formal definition we can use:

```Random variables X,Y take values from the set V and are independent if:
∀ a,b in V: Pr[X=a AND Y=b] = Pr[X=a] * Pr[Y=b]```

Let’s look at an example to make sense of this definition. Suppose you have a coin and you want to know whether the first coin flip influences the second; that is, we want to show these events are independent. Suppose also that we record the first result followed by the second: two tails in a row would be recorded as 00, while a tail and then a head would be 01.

More formally we can say:

```U = {0,1}^2 = {00, 01, 10, 11} and r samples uniformly from U.
Pr[X=0 AND Y=0] = Pr[r=00] = 1/4

We also know that if U = {0,1}^1 = {0, 1}:
Pr[X=0] = 1/2, Pr[Y=0] = 1/2

Therefore:
Pr[X=0 AND Y=0] = Pr[X=0] * Pr[Y=0] = 1/2 * 1/2 = 1/4```

Thus we can prove that these events are truly independent.
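The same check can be done by enumerating the universe in Python:

```python
from itertools import product

# Enumerate the uniform universe {0,1}^2 of two fair coin flips.
U = list(product([0, 1], repeat=2))

pr_x0 = sum(1 for x, y in U if x == 0) / len(U)              # Pr[X=0]
pr_y0 = sum(1 for x, y in U if y == 0) / len(U)              # Pr[Y=0]
pr_both = sum(1 for x, y in U if (x, y) == (0, 0)) / len(U)  # Pr[X=0 AND Y=0]

assert pr_both == pr_x0 * pr_y0 == 0.25  # the two flips are independent
```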

## Conclusion

Phew, that was a lot of writing! But those are the basics of the course; in the next post I hope to go over more material that builds its foundation. If you have any questions, feel free to ask below, and if you have not already, I highly recommend you check out the course here.

Feature Image courtesy of freedooom at FreeDigitalPhotos.net


## Firefox Contextual Identities

Mozilla recently announced a new feature being tested in the Firefox browser called “Contextual Identities”. The idea behind this feature is that users will be able to separate different types of browsing into different identities, allowing them to protect their data with more control. The images below were all taken from the announcement page and should provide a good example of how this feature works.

Unlike Private Browsing Mode, which is a temporary store and wipes everything when it ends, Contextual Identities allow storage of certain data, so things like history are not lost while cookies and other sensitive data are still protected. You may wonder what makes these different from profiles/users; from what I can tell, it’s simply that these are more seamless and can be loaded at the same time, so all you need to look for is the visual cues to know what context you are browsing in.

All in all I think this is a very neat feature, and I look forward to seeing how Mozilla improves it in the future (and whether Chrome will implement something similar). If you are interested in learning more about this feature and how to use it, head on over to the Mozilla announcement here and check it out.


## Aeon: The Songs Of The Wolves

Something a bit different for my blog, but I couldn’t help sharing this very cool article by Holly Root-Gutteridge that I saw on Twitter, about how wolves (and other animals) have different tunes/patterns when communicating depending on the region they are from. It is fascinating to read how this research showed that the animals have evolved to be not just physically adapted to their environment but also audibly different from one another. The full article is somewhat long but definitely worth a read.

The question of when and how language first emerged is the topic of tremendous controversy – it has even been called ‘the hardest question in science’. My work is on what information can be extracted from vocalisations. It is a first step in understanding where the physical body dictates the shape and form of the call, and where the caller has control. For example, a piano player is limited to combinations of a piano’s 88 keys, but a song played on a Steinway will have different sound qualities to the same song on a bar’s upright. In addition, different tunes can also be played. Separating the characteristics of the instrument from the choices of the player is essential before we can understand what meaning those choices might convey.

More questions follow. If howls from different subspecies are different, do the howls convey the same message? Is there a shared culture of howl-meanings, where an aggressive howl from a European wolf means the same thing as an aggressive howl of a Himalayan? And can a coyote differentiate between a red wolf howling with aggressive intent and one advertising the desire to mate? Even without grammar or syntax, howls can convey intent, and if the shape of the howl changes enough while the intent remains constant, the foundations of distinctive culture can begin to appear.

Full story here.


Interesting read: https://www.troyhunt.com/heres-how-i-verify-data-breaches/
