Enabling Factors of Authentication

With the ongoing onslaught of high-profile security breaches, it has fast become a no-brainer that we should use a different password for each service we subscribe to, and that each should be long and complex. Unfortunately, not everyone can remember something like ‘ZG9uJ3QgZG8gdGhpcw==’, so many services have implemented Multi-Factor Authentication to help better defend users.
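(Incidentally, that example string is itself a hint: it is plain Base64, and a one-line decode reveals some apt advice.)

```python
import base64

# The "complex" password above is just Base64-encoded text in disguise:
print(base64.b64decode("ZG9uJ3QgZG8gdGhpcw==").decode())  # don't do this
```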

Password-based authentication has long been the most popular way of logging in to common services: it relies on something a user knows. Multi-Factor Authentication adds further identifying factors to make it harder for attackers to access accounts that aren’t theirs. Other factors we can add include something a user has (a cryptographic token or a phone) and something a user is (biometric features).

Two Factors are better than one

Many common services today provide the option of enabling Two-Factor Authentication (2FA). This uses either a time-based code generated by an authenticator app (such as Google Authenticator or Facebook’s Code Generator) or an SMS text message sent to a mobile device. The two factors in question here are something one knows (the password) and something one has (the phone).

2FA can also reveal when an account is being targeted: if the correct password is entered, some services send the user an SMS code to complete the login. Without this code a malicious actor cannot get in, but the user is made aware of the login attempt and can proceed to change their password.

How it works

After enabling 2FA, users logging in to services online are shown the typical password prompt, and an additional step appears on successful entry: a prompt for a short code generated by a corresponding authenticator app or received via SMS. Some systems which are not web-based, such as mobile apps, may not support 2FA login; for these cases, users can generate application-specific passwords. Browsers can also be remembered to avoid entering a code on every login.
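Under the hood, the app-generated code is typically a time-based one-time password (TOTP, standardised in RFC 6238): the service and the app share a secret, and each independently derives the same short code from the current time. A minimal sketch in Python, where the Base32 secret is a made-up example:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    """Derive a time-based one-time code (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the phone compute the same code from the shared secret:
print(totp("JBSWY3DPEHPK3PXP"))
```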

This type of verification is commonplace in the banking industry, and we are already somewhat accustomed to entering a security code generated by a separate device when carrying out transactions. Now, this additional layer of security can be turned on in many more places.

To help reduce the chances of that Twitter account someday sending spam to its followers, that Facebook profile being defaced with questionable content for all to see, and to make those files on Dropbox safer from prying eyes, consider turning on 2FA for the following systems:

Facebook Login Approvals

1) Visit https://www.facebook.com/settings

2) Navigate to the Security tab

3) Enable the Code Generator (and SMS codes if desired)


4) Enable login approvals


The mobile app will now have an option to invoke the Code Generator.

Twitter login verification

1) Navigate to https://twitter.com/settings/security

2) Opt in to receive login verification requests, either via SMS or the app


3) Follow the instructions to generate a temporary password for other apps if necessary

4) While we’re here, also tick the option to “require personal information to reset my password”

5) If so inclined, uncheck the option to “tailor ads based on information shared by ad partners”

Dropbox two-step verification

1) Visit https://www.dropbox.com/account/security

2) Enable two-step verification

Google 2-step verification

1) Sign in to your Google Account at www.google.com

2) Follow the instructions at http://accounts.google.com/SmsAuthConfig

3) Optionally download the Google Authenticator mobile app

Microsoft two-step verification

1) Sign in at https://login.live.com

2) Navigate to https://account.live.com/Proofs/Manage

3) Enable two-step verification

WordPress Two Step Authentication

1) Sign in to WordPress and navigate to https://wordpress.com/settings/security/

2) Enable the option for Two Step Authentication for SMS/Authenticator


GitHub Two-Factor Authentication

1) Sign in to GitHub and visit https://github.com/settings/admin

2) Enable the option for Two-Factor Authentication


LinkedIn Two-Step Verification

1) Navigate to https://www.linkedin.com/settings/

2) Select the option to manage security settings

3) Click “turn on two-step verification”

2FA all the services

A range of other websites also support 2FA, including PayPal, Steam and Amazon Web Services – so this is certainly not an exhaustive list.

Be sure to take note of the backup codes provided by any service for which 2FA is enabled, and store them in a safe place. It is also worth mentioning that the Authenticator mobile app has had problems in the past which left users unable to generate codes, so always keep backups and/or the SMS option available.

For those interested in locking down their security further, note that enabling 2FA in this way makes the device providing the codes a potential attack vector. Consider turning off SMS message previews on the lock screen of mobile devices, so that codes aren’t freely displayed to anyone who happens to glance over.

With users’ attitudes beginning to shift in favour of authentication mechanisms that look beyond passwords, one can hope the rate at which accounts are ‘hacked’ will slow. Above all, the first point of entry remains the password, so keep passwords strong, complex and dissimilar to dictionary words. If possible, use a passphrase.

Alex Kara


Circles Of Security

It’s a good time to be a hacker. I use that as the umbrella term for all people involved in any kind of security research. Now more than ever, it is difficult to find an operation in which computer security isn’t a paramount concern, and security researchers lie at the centre of defending the systems which hold our data.

Over the last two months I’ve had the privilege of attending the DEFCON hacking conference in Las Vegas for the first time, as well as a summer school on the Design and Security of Cryptographic Algorithms in Bulgaria, hosted by KU Leuven. At the latter, I found myself discussing topics ranging from the perils of using RC4 in TLS to linear attacks on cryptosystems and Galois theory with academic experts from around the world, including Dr Rijmen, co-designer of AES.

I attended DEFCON with Sorin, also from Imperial, and we both learnt more of the professional paranoia necessary to be a security researcher. I met a great deal of interesting people with similar thought processes, and quickly learnt that mathematically obsessing over problems and trying to break things was a welcome norm there.

By contrasting these two conferences, I also learnt that the gap between cryptographers (mathematicians) and cryptographic engineers (implementers) appears quite wide. While one may be concerned with improving a cipher’s S-Boxes to make them more resistant to linear cryptanalysis, the other may be busy protecting an implementation from fault injection or software bugs which leak intermediate values.

Although it would appear this gap is closing and more software engineers are concerned with the underlying maths of what they write, one thing still troubles me:

Our security today still largely relies on either the computational or mathematical difficulty of solving certain problems.

This was the primary focus of a Black Hat presentation [PDF] widely referenced as warning of a so-called ‘Cryptopocalypse’. Bruce Schneier blogged a commentary on this, easing the scaremongering it produced. The talk suggested that as advances in maths become more frequent, and as computers become more powerful, we may soon see practical ways of factoring RSA moduli, and Diffie-Hellman key exchange may fall.

All of the mainstream ciphers used today rely on the difficulty of solving some problem. After some time, these problems may become easier – whether it’s solving the discrete logarithm problem or brute-forcing the DES keys in our SIM cards.

What happens, then, if the computational complexity question is resolved with P = NP? Will we have run out of problems which cannot be solved in polynomial time?

Instead of relying on the hardness of problems for security, and extending key lengths or otherwise making things harder whenever people edge closer to practical solutions, we should consider designing cryptosystems whose mathematics is not merely resistant to known attacks, but provably secure.

Vernam’s One-Time Pad (OTP) cipher is an example of a cryptosystem considered information-theoretically secure. This cipher provides perfect secrecy, but carries a great deal of impracticalities in real-world use: the key must be truly random, at least as long as the message, and never reused. Today, we see companies extending their RSA key lengths and adopting perfect forward secrecy, reducing the likelihood of years of data being decrypted from one compromised session key.
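As a toy illustration (and emphatically not a practical scheme), one-time pad encryption is just XOR against a pad that is truly random, at least as long as the message, and never reused:

```python
import os

def otp(data, pad):
    # XOR is its own inverse, so the same function encrypts and decrypts.
    assert len(pad) >= len(data)  # the pad must cover the whole message
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"attack at dawn"
pad = os.urandom(len(message))   # one-time key material: random, never reused
ciphertext = otp(message, pad)
assert otp(ciphertext, pad) == message
```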

Perhaps, much as companies have done this and adopted multi-factor authentication with time-based code generation, we can investigate means of adopting the OTP as a practical cipher. Arguably, this would reduce the problem of exhaustive key search to that of the entropy of random number generators, though entropy has always been a problem in its own right.

Every layer of the OSI model has its own security considerations, and even the mathematics underpinning the standard libraries we use for encryption has fields of its own. It is a challenge to acquire expertise across all of these circles of security, but good things often happen when people focusing on different areas are brought together.

Maybe the next addition to this spectrum of security groups will be physicists. Once we are able to produce and maintain stronger quantum computers, quantum cryptography will potentially hold paradigm-shifting benefits for local and online privacy.

The programmers of tomorrow may not be coding on classical machines at all, and an intimate understanding of the quantum mechanics involved may become just as important as knowledge of the mathematics of security is to cryptographic engineers today.

— Alex Kara


Trustworthy Computing & The Security Development Lifecycle

On January 15, 2002, Bill Gates distributed a memo titled ‘Trustworthy Computing’. This memo outlined his vision for reinforcing security as a serious factor during application development, and he declared that when facing a choice between adding features and resolving security issues in development, security must always be prioritised. An excerpt:

“If we discover a risk that a feature could compromise someone’s privacy, that problem gets solved first. If there is any way we can better protect important data and minimize downtime, we should focus on this. These principles should apply at every stage of the development cycle of every kind of software we create, from operating systems and desktop applications to global Web services.”

From this, the Security Development Lifecycle (SDL) was eventually born: a process for reducing software maintenance costs and improving security-related product reliability through a combination of documented phases such as threat modelling and attack surface analysis.

On the 13th of May, I used part of my scholarship from the Royal Academy of Engineering to fly out to the United States to attend Microsoft’s Security Development Conference 2013. Held in San Francisco, California, the two-day event served to improve the awareness of both the technical and developmental challenges faced with writing secure software, and to promote use of the SDL.

With this being my first time in the US, I quickly gained an appreciation for the cultural and climatic differences between London and SF, and my experience was most gratifying. I was skeptical that a place could exist where rain wasn’t as predictable as the sun coming up, but I was swiftly proven wrong. I landed sometime in the afternoon, and though my watch told me it was 11pm London time, I didn’t feel like acting as if it were true.

I was fortunate enough to have a long-lost friend living relatively nearby, and we used the opportunity to catch up, shaping my US experience with definitive positivity. I quickly learned that I don’t get jet lag. We ate unhealthy American burgers and healthier (and real!) Mexican food, thoroughly experienced SF culture and saw the sights.

[Photo: the Security Development Conference – it’s like being back in uni again!]

But when we weren’t out being tourists, I was sat in the InterContinental, in true lecture-fashion, hearing talks from a variety of organisations including Adobe, Cisco, IBM, Twitter, Verizon and HP, learning about the SDL. The conference kicked off with Steve Lipner, Scott Charney and Howard Schmidt giving an account of the early days of trustworthy computing at Microsoft and beyond.

The day continued with a strong emphasis on the ISO 27034-1 standard, with talks split up into three tracks: Engineering for Secure Data, SDL & Data Security, and Business Risk & Data Security. I chose to mostly attend the talks in the engineering track, and though they weren’t as technical as I’d hoped, they provided useful considerations related to secure development for programmers.

“Security at development time is rapidly becoming conventional wisdom”

Standards shouldn’t be prescriptive, or industry will invent its own ways of improving faster than the standards can catch up. An example we were given was the Data Encryption Standard (DES), which took a significant amount of time to move from 56-bit keys to 3DES and eventually AES. By the time this had happened, industry had moved on and solved the encryption-strength problem itself.

“A 472% increase in Android malware since 2011”

A number of Gartner studies were referenced in talks on mobile application security, with a reported 5.6 billion mobile connections existing today. It is therefore very important that the basics of security are honoured by developers, who are often guilty of over-provisioning the permissions for their apps. They should:

  • Use HTTPS not only for posting sensitive data, but also for presenting the forms, to resist snooping
  • Use a suitable cryptographic cipher suite with appropriately strong keys for encryption
  • Sandbox, for robustness against data mining and brute-force attacks
  • Apply static application security testing for XSS/SQLi remediation
  • Identify taint propagation by threat modelling and tracing inputs from source to sink
  • Keep interesting information out of the source code
  • Mitigate side-channel data leakage by checking caches, logs and clipboards
[Photo: Elevation of Privilege – a threat modelling card game]

These points, though seemingly obvious, appear forgotten in some high-profile cases out there today. The majority of vulnerabilities are still dominated by stored XSS and SQLi exploits, and though it has taken roughly a decade (2000–2012) to go from a ‘do nothing’ approach to a reactive one, we are just beginning the transition to the proactive.

Security without usability, however, is paralyzing. We heard about the importance of keeping software functional, secure and usable in a talk on cloud security, measured boot and UEFI, and were given a demonstration of the Trusted Platform Module (TPM) being used to whitelist or blacklist firmware changes. The implication was that access to internal resources in an enterprise environment could be denied to devices that did not comply with policies such as being on the latest OS update. I was rather skeptical about this talk, since I felt there was a contradiction between the promise of usability and the practice of firmware-based blocking for security.

We must keep security transparent to the user, and if many users cannot get online because an OS update came out the previous day, both usability and productivity are impacted. Yet we learn nothing about the security of the devices – we simply block them for not being up to date. Though preventative, to me this borders on software-based paranoia, and feels like it would cause far more problems than it hopes to solve.

The most technical talk I experienced was one on Integral Security, given by Robert C. Seacord from CERT. He was asked to give, in one hour, a talk which he said would typically need a full day. We heard an extremely fast-paced whirlwind tour of integer types in the C standard, with discussions of wraparound and overflow. Hearing the story of a wraparound error in a 16-bit counter causing Comair to ground 1,100 flights on Christmas Day in 2004 helped emphasise the need for careful bounds checking and type safety.

Most programmers wouldn’t think twice before adding an unsigned char to a signed char and storing the result in an int, which is terribly bad practice. We saw examples of arithmetic overflow in operations where it might not be obvious, such as division, modulo and comparison. As C programmers we must avoid conversions that lose value or sign, and check that conversions to a type of greater rank or different signedness really are safe. The bottom line:

“In C world, know what you’re doing and be careful”
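Neither wraparound nor C’s conversion rules carry over directly to Python, but ctypes can mimic C’s fixed-width types. A small sketch of both failure modes (the 16-bit counter is a stand-in for the Comair-style bug, not the actual system):

```python
import ctypes

def inc16(counter):
    """Increment the way a C-style unsigned 16-bit counter would."""
    return ctypes.c_uint16(counter + 1).value

print(inc16(0xFFFF))              # 0 -- silently wraps instead of failing loudly

# Conversions can silently lose value or sign too:
print(ctypes.c_int8(200).value)   # -56: 200 doesn't fit in a signed char
print(ctypes.c_uint32(-1).value)  # 4294967295: why mixed-signedness comparisons bite
```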

On the second day of the conference, we heard talks on Single Sign On, Claims Based Authentication, a roundup of SDL adoption and a rather curious talk by Brad Arkin, Chief Security Officer at Adobe, titled “Accepting Defeat and Changing The Battle Plan”.

He called out his experience that ‘making software more secure by finding and fixing vulnerabilities in code’ was a ‘complete waste of time’, and backed up his claim with cases where Adobe Reader and Flash Player suffered their greatest series of zero-day vulnerabilities after extensive fuzz-testing and fixing. His emphasis here was on tackling dogma in the security profession, explaining that “you don’t have to outrun the bear, just the last person”.

[Image: Adobe Reader zero days after fuzz-testing]

This talk was somewhat unsettling for me, as I couldn’t help but sense a tone of defeatism about the topic. Though fuzz-testing had no visible impact on the number of attacks Adobe suffered, I would argue there could have been significantly more exploits against them had there been no testing at all. He explained that by outrunning the last person rather than the bear, they reached a point where “vulnerabilities were still there, but mitigations made it harder to exploit, so attackers moved to exploiting SWF in Office”. It seemed the priority here was not to write secure code, but to shift hackers’ targets onto others. By dismissing fuzz-testing and fixing as pointless, the talk could be misinterpreted as ammunition against adopting SDL practices, which I found counterproductive.

Though developers may not write mathematically provably secure code in practice as they did at university, the attitude that it will fail from the outset is toxic. I believe the same level of effort we put into reasoning about concurrent code for deadlock prevention should go into secure development, and the attitude from the outset should be to obsessively strive for robust, functional and secure code, from the ground up. Fuzz-testing may not change the fact that an application’s security model is flawed, but its value in the first few stages is significant and essential. I do, however, understand that it isn’t a one-size-fits-all solution to closing vulnerabilities – careful work and investment will be necessary for fixing what’s left. It is these concepts I feel the talk argued against, rather than the dogma.

[Photo: Google I/O at the Moscone West Conference Center]

Ultimately, I gained enough from the Security Development Conference to further realise the significance of security considerations from the ground up when designing software. The talks were motivating and useful, with a bit of fun in the form of PHP-bashing too. I met developers from various startups, and some cool folks from NIST. It also turned out that we were two blocks from the Google I/O conference, which explained the numerous sightings of people wearing Google Glass.

I returned to London with eager anticipation for the next time I go back to the US, as the conference and my experiences have given me much to reflect on. One quote in particular I liked from Robert C. Seacord was that:

“Compiler warnings are the First line of defence. The Zeroth line of defence is knowing how to code correctly”.

So I guess ‘real’ programmers do pay attention to warnings after all…

– Alex Kara

All views expressed in this post are those of the author and are not representative of any associated entity.


The Computer Revolution Hasn’t Begun Yet

On March 12, I attended the inaugural Global Grand Challenges Summit at the IET, formed by the Royal Academy of Engineering in partnership with the National Academies of the US and China. The event served to establish a platform for leading engineers, scientists and others to discuss today’s great challenges.

While similar events have often focused on more traditional forms of engineering, such as building bridges, water defence and engine design, this event was an exception which, for me, marked a paradigm shift: there was remarkable emphasis on computer science and biology. What counts as traditional in engineering is clearly changing.

On the first day, Dr Craig Venter, founder of the J. Craig Venter Institute and the scientist who led the first sequencing of the human genome, gave a plenary address. His talk described a vision of the future where synthetic viruses created to combat pandemics can be coded and distributed to the world online. Digital security already plays a big part in our lives, but with the advent of 3D printers and the dawn of the era of synthetic biology, cryptography will play a key part in protecting lives across the world, whether in the delivery mechanism (the network) or the device (physical or biological).

The summit’s panels, chaired by Jim Al-Khalili, discussed the importance of raising awareness and educating not only the general public but also the next generation of kids about the engineering which surrounds us, and online learning platforms are starting to emerge to address this. Craig Venter explained that ‘disruptive change is needed, and synthetic life will be part of that change’, joking that, living in California, ‘we already have many synthetic humans walking around’. Hinting that his lab was close to publishing more work on producing synthetic life, his serious yet mildly humoured account of work in the field emphasised the impact it will have on our daily lives.

For me, the engineering community appears to have undergone two interesting transitions in the years I’ve been attending events. The first is a far greater appreciation of the scientific, mathematical and computing advances that have paved the way for improving the quality of life of both developed and developing countries. The second, however, is the tendency to emphasise educating the next, younger generation to join the field and solve our problems. I can’t help but admit that, despite being 21, I felt as if I were too old to make a difference, given how urgently the focus on the next generation was portrayed. Though I understand the sentiments, I believe we need to lead by example: not only educating the next generation, but demonstrating great work and science ourselves to show where that education leads. I felt an air of defeatism in the way some of these points were made, but I suspect such provocation may have been deliberate, to motivate us to keep doing what we do.

Communicating science well plays a big part not just in inspiring the next generation of researchers, but also in public opinion of our work. I learnt that over 50% of the world’s genetically modified food resides in developing countries, which rely on it to solve local sustainability problems with their crops. If we can effectively describe the science going on, people can get the full story and draw fully informed conclusions on such controversial topics.

We had a surprise guest appearance by will.i.am, and Bill Gates appeared via video link to commend the event for ‘getting the best minds together to think about how to lift our living conditions’. I live-tweeted throughout the event under the hashtag #GGCS2013 and found the summit extremely rewarding, motivating and an inspiration to continue putting my best efforts into directing my work towards solving grand challenges and improving our quality of life.

I was particularly keen on the analogy of the cell and genetic data to ‘software that writes its own hardware’, loosely linking to Turing’s Universal Machine. I’ve had Turing machines on my mind quite a lot these days, thinking about the shortcomings of the Von Neumann architecture. I find it interesting that many computer scientists, myself included, have directed their efforts to problems in biology or medicine. Turing’s paper on The Chemical Basis of Morphogenesis is possibly the first example of this: he pioneered the fundamentals of what it means to compute algorithms, then applied them to mathematically explain how different biological shapes form. In the process, his reaction-diffusion ideas can also be seen as an early discussion of Chaos Theory. The fundamental thread linking all these fields together is clearly the maths.

I’m still in the process of getting my research on muscle interactions published; it’s come a very long way since my dissertation, which was Highly Commended in the Undergraduate Awards 2012. I’ve been working with my supervisors from the National Heart & Lung Institute to get our paper into a second draft, and will be meeting with my Computing supervisor at Imperial sometime next month to discuss submission to the journal we’re aiming for.

Besides that, I’ve been keeping myself busy and became a Fellow of the Royal Institution of Great Britain. I’ve gone to a fair number of their events now, and they’ve been consistently inspiring and intellectually stimulating. I also signed up to a few edX online courses, notably Quantum Mechanics and Quantum Computation from Berkeley, and Introduction to Biology from MIT. I’ve been averaging 100% on my weekly assignments for QM (with a lot of hard work) and I’m looking forward to completing the course in a few weeks. My progress in the biology work hasn’t been as good, but I’m still on course to pass above 60% there.

I think it’s important to keep up with fields outside one’s area of expertise in order to retain the ability to craft cross-disciplinary solutions to big problems. There are so many avenues to explore and not enough scientists to research them, and one can think outside the box much more clearly when starting outside its boundaries. We’re still scratching the surface of what we can do with computer science; perhaps models such as Turing’s can be realised in full with biological hardware, or perhaps it’ll be something else. Why should software and hardware be different at all?

One of the themes of the Grand Challenges Summit was the thought-provoking sentiment that, despite the incredible advancements undertaken to get where we are today, the computer revolution hasn’t begun yet. All we do know is that science will be 100% of the solution.

Thinking on,

Alexander Karapetian


On The Millennium Prize Problems

I believe the art of developing software can be ever improved by awareness of the underlying maths. I wrote the following winning submission for the RCSU Science Challenge a while ago, and figured it would be appropriate to post it here as a reminder of why it is often important to remember the academic side of things:

Computer Science (not programming) is an often misunderstood field. Many people working outside the area do not perceive the breadth and depth of research material that is pumped out daily, or how it ends up affecting our everyday lives. To observe its profound effect on the real world, we must first examine the theoretical side. A variety of problems famous in mathematics are being fiercely researched today: the Millennium Prize Problems and Hilbert’s Problems are collections of unsolved problems selected by the Clay Mathematics Institute and the mathematician David Hilbert respectively. The former are noted for their million-dollar prize for the first verified solution to any one of the seven problems, six of which remain unsolved at the time of writing.

Why should the average person care about finding solutions to them? First, we must realise their real-world applications. While some high-profile fields are largely saturated in terms of research progress, Computer Science has taken the opposite path: the field is gaining momentum, with discoveries emerging from breakthroughs in logic, theoretical computational systems and mathematics.

A core component of electronic circuits is the transistor. The number of transistors within computer processor chips has increased over time roughly according to Moore’s Law, allowing for greater processing power. This trend holds that the transistor count will double roughly every 18 months, and must eventually reach a limit. The limit is frequently extended, and consumers continue to see improvements yearly. This greater processing power is not limited to consumer use, however, and research institutes have begun to take advantage of the speed and efficiency gains to harness the raw power of the integrated circuit and GPU.

If Physics is to be considered the application of Mathematics to the Universe, frequently giving us answers to life’s questions, then Computing is the application of Mathematics to the virtual Universe. For instance, creating a perfect sphere is impossible in the physical realm, though such perfect elements are digitally representable. This expressive property has opened the door to new methods of analysis with machines, using them as an aid to solving existing research problems. For instance, a quantum mechanical system described to us by Physics is best explored by a quantum computer, a machine which is capable of operating on the same physical levels as the very realm we’d be attempting to understand.

The Riemann Hypothesis, considered one of the most important problems in pure mathematics (Borwein et al. 2008), involves the distribution of the prime numbers. The hypothesis states that the non-trivial zeros of the Riemann zeta function lie on a critical line. While no proof yet exists, the first ten trillion zeros have been verified by distributed supercomputing efforts.
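Stated a little more precisely (a standard formulation, not part of the original essay), the zeta function and the hypothesis are:

```latex
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\Re(s) > 1),
\qquad \text{RH: every non-trivial zero } \rho \text{ of } \zeta
\text{ satisfies } \Re(\rho) = \tfrac{1}{2}.
```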

While an underlying pattern appears plausible, this Hilbert Problem remains unproven. The unpredictable nature of the prime numbers has been put to use in the RSA cryptographic algorithm (Rivest, Shamir and Adleman, MIT, 1978). This system, considered unbreakable today due to technical infeasibility, provides digital security with primes. Banks, websites and governments worldwide have adopted RSA, and it is a common means of distributing an encryption key. A brute-force attack would need to recover the secret prime factors, but since there is no known shortcut for finding them, computers may take years to break a single key, rendering the method impractical. A proof of the Riemann Hypothesis, however, may reveal a pattern in the primes and provide a means of breaking RSA.
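To make the reliance on primes concrete, here is textbook RSA with toy numbers – a sketch only, since real moduli are hundreds of digits long and real implementations add padding:

```python
p, q = 61, 53                  # two secret primes (real ones are enormous)
n, phi = p * q, (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

message = 42
cipher = pow(message, e, n)    # encrypt with the public key (e, n)
assert pow(cipher, d, n) == message  # decrypt; deriving d from n alone means factoring n
```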

Brute-force guessing of standard passwords is also impractical. We are all currently encouraged to create case-sensitive alphanumeric passwords. The problem with checking every possibility lies not with verification – a computer can easily identify whether two pieces of text are equal – but with first obtaining the solution to compare. Some techniques search through dictionary entries, allowing quicker identification of common passwords.

Another Millennium Prize Problem, touted as the most important unsolved problem in Computer Science, P vs NP (Cook, 1971), revolves around this concept. It asks whether every problem whose solutions can be quickly verified by a machine can also be solved quickly. Problems that can be solved quickly are classified P, while those whose solutions can merely be verified quickly are NP. In the case of guessing passwords, it becomes apparent that verification is easy (in P), while searching for the correct password appears hard.
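A sketch of that asymmetry in Python, with a deliberately tiny three-letter, lowercase search space so the loop terminates (the toy password is made up):

```python
import hashlib
from itertools import product
from string import ascii_lowercase

target = hashlib.sha256(b"zzz").hexdigest()  # hash of the unknown password

def verify(guess):
    # Verification: one hash and one comparison -- cheap.
    return hashlib.sha256(guess.encode()).hexdigest() == target

def brute_force(max_len):
    # Search: the candidate space grows exponentially with length.
    for n in range(1, max_len + 1):
        for combo in product(ascii_lowercase, repeat=n):
            candidate = "".join(combo)
            if verify(candidate):
                return candidate

print(brute_force(3))  # finds "zzz" here; infeasible at realistic lengths
```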

The world currently assumes P is not equal to NP, most Computer Scientists agree (Gasarch, 2002), and the majority of security systems rely on this assumption. A claimed proof that P is not equal to NP (Deolalikar, 2010) was later shown to be incorrect, though the episode raised many concerns for security. The implications would be far-reaching for society. A correct proof either way will have great impact, since the solution to P vs NP links intrinsically to solutions of the other Problems. If P = NP, not only will a new era of cryptography need to be abruptly ushered in, but NP-hard problems within countless other fields such as Biology (genome sequencing, protein structure prediction) and Physics (simulations) would become easier.

The effects of solutions on society’s widely used systems cannot be ignored. They would pave the way to a once-distant future, with consequences such as the rise of new, future-proof technologies resistant to P = NP attacks, leading to better consumer systems. A hail of advancements in knowledge would follow, improving society’s quality of life through significant progress in Biology, Medicine and other fields. Grigori Perelman, responsible for solving the Poincaré Conjecture (involving the characteristics of spheres in higher dimensions), remarked: “Where technology creates new machines and devices, Mathematics creates their analogues – logical methods for analysis in any field of science.”

“Every Mathematical theory, if it’s strong, will sooner or later find an application.” (Perelman, 2003)

As researchers move on to proving the next unsolved theorem, armed with potent approaches and technology from Computer Science, the laypeople of society would truly revel in the consequences of such discoveries, and would therefore benefit the most overall. As we all continue to work to build a better future, it’s worth taking a moment to appreciate the maths which brought us here, and, with the right ideas and time, where we can go next.

— Alex Kara


The Importance Of Being Compliant

My new year’s resolution is 2880×1800. With that out of the way, I’d like to acknowledge this year drawing to a close with a brief nod to standards compliance. Industry standards are important, but should not be based on methods which merely appear to have stood the test of time. They should – and often are – be based on methods known to be robust, reliable, scalable and mathematically sound. It is these strengths, together with the absence of any reliance on obscurity for security, which truly allow algorithms to stand the test of time. In light of the countless high-profile ‘hacks’ which occurred this year leading to password leaks, one must ask: why does it seem so hard to follow basic principles?

Security programming is hard. There are, however, certain fundamentals which must be satisfied if one is to build a secure system. Passwords should never be stored in a database in plain text. This has been exhaustively repeated, but the point doesn’t seem to be hitting home. Consider Tesco, for instance, who recently came under fire from a UK watchdog for e-mailing users their passwords in plain text when a reminder was requested online. Their initial justification was that they store passwords ‘in a secure way’ and that passwords are ‘only copied into plain text when pasted automatically into a password reminder mail’.

If the mechanism for this exists, the passwords are not stored in a secure way. If you are able to get your password back after setting it, there are inherent problems in the system’s security. The obvious problem is transmission over an insecure channel, leaving your personal data open to anyone snooping on the network connection. The real issue, however, is that if the administrators of a system can get your password back, so can anyone else. Cryptographic best practices exist to protect your data from everyone, whether malicious hackers or the administrators of the system you intend to log in to.

I recently signed up to the Financial Times online. Shortly after creating an account, I was sent an e-mail from them telling me what my password was. EE’s Twitter account has today been asking people for their passwords over direct message to assist with their problems, and not long after, spoof accounts imitating EE were created to phish the same details from other users too.

This behaviour is unacceptable in a time where online data surrounds our lives so abundantly. So what’s the right thing to do?

Passwords must never be stored in clear text within a database, and symmetric encryption is not sufficient either. Best practice is to apply a one-way hashing function (with a random salt) to transform each password into something which differs wildly even when different users register with the same password. This helps because, if the database is compromised, reversing the hashes to find the original data is not a trivial process. Without a salt, a malicious hacker can precompute hashes of common passwords and compare the data for collisions. What is compared when logging in to a system are the hashes, not the passwords themselves.
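A minimal sketch of that practice using only Python’s standard library (PBKDF2 here; bcrypt and scrypt are common alternatives):

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # A random per-user salt means identical passwords hash differently,
    # making precomputed tables of common hashes useless.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password, salt, stored):
    # Login compares hashes, never plaintext; a constant-time comparison
    # avoids leaking information through timing.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, stored)
assert not check_password("password123", salt, stored)
```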

The behaviour of banks may be another potential issue. NatWest asks users for certain characters of their password, coupled with a username, to log in online. Halifax, however, asks users for the entire password plus specific characters of their ‘memorable information’. Halifax’s method is more sound here, as the full password can be stored as a conventional hash, with the memorable information checked as a second step. The problem with NatWest’s method is that they have the ability to verify individual characters of a password, so one can assume it isn’t hashed in the conventional manner described above. Whether it actually departs from best practice, we cannot be sure.

We need to move away from a culture where high profile leaks by groups such as Anonymous are the biggest motivators of security improvements, and start ensuring systems are standards compliant across the board.

What can you do to protect yourself from appearing on lists of leaked usernames and passwords and having your accounts compromised? Don’t use the same password for different services, and use strong alphanumeric passwords which don’t resemble dictionary words where possible. If you’re so inclined, stick several dictionary words together to improve entropy. But most of all, be aware that if you can get your password back in plain text from a system you trust, stop trusting it, and keep your credentials secret.
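For the passphrase route, one can sample a word list at random, as sketched below; the word list here is a hypothetical stand-in for a real one (such as diceware) with thousands of entries:

```python
import secrets

# Hypothetical word list for illustration -- use a long, real list in practice.
words = ["orbit", "velvet", "quartz", "meadow", "copper", "lantern", "drift"]
passphrase = "-".join(secrets.choice(words) for _ in range(4))
print(passphrase)  # e.g. quartz-drift-orbit-copper: long and memorable
```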

I hope we’ll see fewer exploits of this nature as we build the ‘prevention is better than cure’ philosophy into our systems, and I wish you all a cryptographically secure 56ab24c15b72a457069c5ea42fcfc640 22af645d1859cb5ca6da0c484f1f37ea 84cdc76cabf41bd7c961f6ab12f117d8.

– Alex Kara


Happy Christmas Everyone!


2012 has been an incredibly fantastic year. It’s Christmas. It’s also Newton’s birthday. Celebrate! ^_^


Dawn of the Zero Day

It was 7:45am. I’d woken up fifteen minutes before my alarm sounded. This was a rare occurrence, considering how often I’ve slept through it these days and that I’d gone to bed sometime after 2am. Having slept uncomfortably, I checked my phone for social updates for some motivation to get up. Be careful what you wish for.

I was tagged in a post by a friend. He often posts computer science gems to keep us in the loop. This was a simple post on Tumblr. And after absorbing the gravity of its message, I could stay in bed no more.

It was a description of a zero-day exploit that could allow anyone to gain access to anyone else’s Skype account. Various thoughts rushed through my head. The people must be told about this. I need to protect myself against it. Where do I go from here?

Needless to say, I’d got out of bed with haste only to deadlock standing still thinking about my options. Why did I care? This was big news, and very new. The post was made a few hours before, and it was only a matter of time before it went mainstream. At that point, if I didn’t have the correct defences up, it would be endgame.

I fired up my laptop and thought about the problems Skype themselves would face in dealing with this. Coming up with a rapid fix to such a broken feature isn’t too hard, but ensuring it doesn’t break something else is. I logged in to Skype, relieved that my account hadn’t been hijacked in the few hours the post had been up. The nature of the flaw meant I had to make some changes to my Facebook profile, closing a few doors that left me wide open. This was a case where professional paranoia was no longer paranoid.

I then spent a good 30 minutes changing details on my Skype profile. I remained dissatisfied with the process, however. There was a striking sense of instability about the whole thing, and I wasn’t convinced my data was safe.

This sparked a chain of thoughts about how often I’ve been alerted to this sort of thing, and whether I would ever feel safe considering how frequently these things happen. The new age of zero-day exploits has long since dawned, and the state of security is in such disrepair that disclosures no longer lead to swift fixes. Is a paradigm shift in standards necessary to effect change? How can we properly enforce industry-strength security? I digressed.

Deciding it wasn’t worth wasting any more time over, I figured I needed to start heading out. There wasn’t much time to enjoy my usual Starbucks breakfast, and I felt too unsettled on the tube to play Pokemon on my DS. So here I am, on the (kind of) wrong tube train, deciding what the next best courses of action are. And pondering when I’m going to feel safe enough to break the news to a wider audience against a perceived duty to inform.

UPDATE — 13:00 – As the exploit appears unusable now that Skype have disabled password resets while they find a fix, I’ve linked a source describing the methods.


End Of An Era

I know I’ve not posted on here for quite a while – and the excuse of being busy is starting to wear thin, but it couldn’t have been more accurate! An update on some of the significant things that have happened over the last few months:

My last post was an election manifesto, much as it was around that time the year before. Running in a student union election the second time round made a lot of things easier to anticipate, though I did not win the position of Felix Editor in the end. Since then I’ve had the chance to appreciate and reflect on the amazing time I’ve had in the positions of News and Science Editor, working with an extremely devoted and professional team under high-calibre Editors-in-Chief, and I’ve had a fantastic experience from which I’ve learnt a great deal.

So comes the end of my time at Felix, and my time at university too. Thanks to my decision to switch to the three-year course (I forgot I was featured on that link, cringe..), my time at Imperial has felt balanced enough that I never reached the point of feeling I couldn’t wait to leave. I’ll definitely miss seeing my friends so frequently, the many hours I spent in the labs, and the cosmopolitanism of the campus.

The day after I found out I had lost the election, I found out I’d won the Cadzow Smith award from the Worshipful Company of Engineers. This was an award for which engineering students from eleven universities in London were nominated to compete, and I (and Imperial) took the prize! I’d mentioned this in a previous post, along with the déjà vu that came with it. The ceremony was held a few weeks ago in the City, where I was presented with a commemorative medal. A number of engineering traditions surrounding the Queen were upheld, and I couldn’t help but be reminded of CGCU events where I’d seen similar things.

That said, it definitely needs mentioning that this year, I’ve never felt more a part of the RCSU. The committee’s endeavours for their faculty union’s students are simply unparalleled, and they’ve been instrumental in sustaining my positive experience at Imperial and ending it on a high. While I may be an Engineer by degree, I’ve certainly felt like a Scientist at heart, and the committee’s warmth and amiability have been ever-pleasing. Leaving Imperial with ICU Colours, and as the first non-RCSU person to obtain RCSU Colours, is heart-warming.

Another reason I may have felt like a Scientist could have been due to my final year project, which I frantically started working on whilst shutting down most social aspects in my life after the elections. My project was titled “optimised kinetic simulation of muscles”, and it was anything but as simple as it sounds. This was a joint effort from the Department of Computing and the National Heart and Lung Institute, with me in the middle under supervisors from both sides. My Computing supervisor was Professor Wayne Luk, who leads the Custom Computing research group. He encouraged me to use the Maxeler platform and FPGAs to hardware accelerate my work. My Medical supervisors were Professors Roger Woledge and Nancy Curtin, whose research revolved around muscle contraction and understanding the underlying interactions.

There were a few novel aspects to our methods, which resulted in us producing measurements never before observed. We sought to run our simulation about a million times for statistically significant results, which would have taken about 371 days. My three-stage optimisations brought that down initially to 27 days, then to three hours, and finally to 30 minutes.

Getting the report done was a massive mountain to climb, since I’d decided to do it the correct way – using LaTeX. I found myself bonding with the fourth years in labs who were in the same physical and mental situation, and couldn’t help but appreciate that feeling of being ‘in it together’ as we slaved for hours upon hours down there every day.

The night before the final presentation, I realised I had to change a large number of slides since my message wasn’t coming across very well. I spent the night loaded on so much caffeine it was doing more harm than good, and was barely able to sleep. Fortunately, my supervisors’ feedback after the presentation was that they couldn’t have known I had barely slept, and that it looked like I had practised for weeks! Needless to say, when I got home I crashed, only to wake up the next day and fully realise the gravity of the situation: having submitted my report and given my presentation, I was essentially free!

I later attended my final Summer Ball and got my results, and though I’m graduating in October with a 2:1 (yay!) in Computing, it certainly hasn’t felt like I’m leaving yet – my supervisors agreed for me to come back over the summer and continue working on our research. The project itself received a First Class grade, and I was one of the people asked to present at the open day!

My summer work is classed as a UROP placement under my Computing supervisor, and research is progressing smoothly. We’re making steady strides with our optimisations and modifications for better accuracy. It’s essentially exploratory work, and the computations serve as a guide for where to look next regarding the actomyosin interactions. We also have an expert from Maxeler on board, who has happily provided me with access to their workstations at their HQ in Hammersmith – it’s all very exciting!

I think it’ll feel like I’m leaving when September draws near; for now, all I have is the peculiarity of being a visitor in my own labs due to a deactivated ID card. In any case, I look back on the last three years as the best time of my life so far, and I look to the future, as always, with much enthusiasm!

-Alex Kara


Vote Alexander Karapetian for Felix Editor!

Voting over – thanks for all the support guys! Results will be announced next week :)

Vote Now! | My Facebook event for campaigning  |  My Felix articles

Hello there! Thanks for visiting,

I’ve been working for Felix for three years now, and I love this paper. In my experience undertaking roles such as News Reporter, Copy Editor, Photographer, Web Editor, News Editor (2010-11) and Science Editor (2011-12), I’ve learnt a lot about how to keep Felix running, and I feel the time has come for me to take the reins.

I’m a third-year Computing student, and I’ve always had a keen interest in writing and journalism. I’ve contributed to The Guardian, The London Student and The Medical Student before, and Felix has certainly taken up most of my time during my degree. Since the day I started at Imperial, I’ve dedicated myself to staying on top of news every week. So why should you vote for me?

Transparency: Your Felix, your rules

A vote for me is a vote for openness, approachability and sociability throughout Felix. Whenever I go out representing the cat, I do my best to get people involved. Anyone should be able to contribute, and everyone should know our door’s always open. If elected, I will set up and man a Felix stall in the JCR every week to get to know our readers personally, to listen to your criticisms, and to give you a platform to approach us and get involved.

We’re not just a newspaper, but a society too. We’ve seen some incredible efforts put in this year and watching freshers, postgraduates and everyone in between become such an integral part of Felix is wonderful and needs to be continued. As your Editor, I promise not to just look after Felix, but also to do my best to keep you, the readers, happy. I will communicate transparently to help make Felix your paper, one that you can truly be proud of, and one that reflects the excellence of this university.

Stability: A bigger Hangman

We need to bring back Hangman. I’ve listened to your comments and it’s pretty clear that you want Felix to be funny. You just want it done right. We’ve learnt some valuable lessons this year, and if you elect me, I promise to triple Felix’s comedic content and launch two new Hangman subsections to give you those much-needed laughs. Satire, and I’m talking Mock The Week style repartee localised to Imperial, would feature prominently in my Felix.

We’re a student paper, not The Guardian, and you can rest assured that while I will continue to stay on top of college issues and bring you interesting features weekly, we’ll keep it as entertaining as possible for you. More comics, more wit, more investigative journalism and more opportunities for you to write in and get involved. It’s your content, done your way.

Professionalism: No more mistakes

We can do so much more with the resources we have. Felix is based near PhotoSoc in the West Basement of Beit Quad, and we’re down the corridor from Stoic TV as well as IC Radio. If elected, I will closely collaborate with the rest of the Media Group to produce more entertaining shows, as well as revamping FelixOnline to get relevant student-led videos showing up on our online articles too.

Together, we can be a very strong media outlet, and I will ensure our output resonates with your interests at the core. We will introduce ways to allow you to anonymously tip-off Felix about any potentially newsworthy material, and with our resources, we can deploy a team armed with cameras to capture the moments as they happen. If elected, I will also improve our quality control so that we catch mistakes and errors effectively. We’re all tired of seeing errors in an article’s research, spelling or grammar, and I vow to put a stop to them.

Outside the box: Keeping the Cat free

South Kensington isn’t our only campus, and I feel it’s important to ensure we’re reporting on Imperial as a whole where relevant. If elected, I promise to look into ways of distributing the paper to other campuses that do not currently receive Felix, and I will proactively look into producing a sister publication targeted at Silwood Park whilst ensuring I, Science and Phoenix are comprehensively taken care of. As Science Editor this year I’ve helped keep our section open to contributors to I, Science, and unified collaboration is key to our prosperity.

If elected, I will adhere to a rigorous schedule to keep the distribution points filled and will introduce improvements to the way we deliver to Charing Cross and the Reynolds. There’s no reason our website has to follow the same deadlines as the paper, either. I’ll increase our frontline reporting, post to the website more frequently, and as we’ve been on the same design for nearly two years now, I feel a departure from the current style will be beneficial, introducing a fresh look and feel.

Alexander Karapetian: Your Editor

Felix is an award winning publication and I have been tirelessly involved in ensuring its production to the highest standard in both content and coverage thus far. If elected, I will aspire to elevate its standard and maintain its award winning status, and your vote can make it happen. Unleash the true potential of Felix and vote Alexander Karapetian for Felix Editor. Thanks!

Join the Facebook event and invite your friends

Read my Felix articles

Vote at: www.imperialcollegeunion.org/elections by placing a 1 next to my name.

Voting is open from March 12, 00:01 to March 16, 23:59

Twitter: www.twitter.com/alexkara15
