Cybersecurity at the Université de Rennes 1





This website was first published in 2016 and is currently being updated. In the meantime, you can download our 2020 cybersecurity brochure.

In 2020, we also opened CyberSchool in Rennes, the French Graduate School in Cybersecurity.

Cybersecurity is the security of information systems; that is to say almost all the devices in our environment which are equipped with a minimum computing capacity: computers, smartphones, tablets, key fobs, smart cards, connected and automated objects, etc. Cybersecurity does not simply apply to the internet.

This entire system will only be secure if all the information which passes through objects and digital networks is kept confidential and authentic, and if its origin and ownership are respected. Over and above data protection for all (individuals, businesses, associations and institutions), cybersecurity is also connected to cyberdefense and plays an important role in State security.

It is often said that the cybersecurity chain is only as strong as its weakest link. Guaranteeing cybersecurity concerns every activity of the digital world: equipment, software, networks, and also developer and user training. All digital activities must adhere to a relevant legal framework which guarantees individual freedom and intellectual property.

The Université de Rennes 1 covers all the links of the cybersecurity chain, together with its on-site partners and the Pôle d’excellence cyber. This long-form website (texts, photos, videos and apps) aims to present, in a simple way, the research conducted on cybersecurity on the university’s campus, along with the courses offered. Enjoy!

Cyber-attacks: users are (primary) targets

IRISA director, Jean-Marc Jézéquel, reminds users that cybersecurity is a shared concern.

In reality, how can we preserve our own cybersecurity?

Serge Aumont, head of the IT Security department at the Université de Rennes 1

Serge Aumont, head of the IT Security department at the Université de Rennes 1, works alongside the University’s 31,000 users to efficiently communicate the key data security principles for the University’s community.

Serge shares his advice for all professional network users (university networks in particular).

When online, keep private and professional business separate

Use different passwords and user accounts for professional and personal use. It goes without saying that you must never choose the same password for several online services; this is an invitation to digital identity theft. If you can’t remember your numerous passwords, use a password manager (Keepass, Dashlane, 1Password, etc.).
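To make the advice concrete, here is a minimal Python sketch (purely illustrative, not a tool provided by the university) that generates a distinct random password for each service, which is exactly the job a password manager automates for you:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per service, as the advice above recommends.
for service in ("university-webmail", "personal-email", "cloud-storage"):
    print(service, generate_password())
```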

Monitor your emails and usernames

Despite high-performance anti-spam filters, students and staff are susceptible to social engineering attacks. These are designed to tempt the user either to click on a link which will infect the device or to enter their credentials on a malicious website. At the Université de Rennes 1, almost one user per month reports a password theft, but many victims remain silent. It is important that these users report the incidents without feeling guilty: the traps are becoming increasingly clever and strike when least expected.

Use the http://haveibeenpwned.com website to check if an account tied to your e-mail address has been compromised ("pwned") in the huge security breaches suffered by major internet companies such as Adobe, DropBox, etc.
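The same site also offers a separate « Pwned Passwords » service with a public API based on k-anonymity: only the first five characters of the SHA-1 hash of a password are sent, never the password itself. The sketch below is an illustration of how such a check can be scripted, not an official client:

```python
import hashlib
import urllib.request

def password_was_pwned(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if never).

    Only the first five characters of the SHA-1 hash are sent to the service
    (k-anonymity), so the password itself never leaves your machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # The response is a list of "HASH_SUFFIX:COUNT" lines for this prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(password_was_pwned("password123"))  # a breached example: expect a large count
```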

Reading material: 5 things to check when you receive an email, ANSSI (National Cybersecurity Agency of France)

Keep up to date

Please note that this rule applies to any computer on which you have been granted software updating rights.

Even if you are surfing renowned and perfectly legitimate websites, your computer can become infected with a virus. These sites host advertisement banners which may sometimes spread malware: not all the advertising services which manage the banners adhere to security standards. If your browser plugins are not kept up to date, your computer can become infected as soon as the page is displayed – without even clicking on the banners. What is the weakest link? Mainly Adobe Flash, which will soon be replaced by HTML5. The latter is a safer option which does not require plug-ins on modern browsers. Until Flash disappears from the web, always ensure you install the latest available versions. If your computer is not administered by your employer, then the same rule applies to all your software, operating systems and antivirus programmes. Activate automatic updates wherever possible.

Store your professional data on professional servers

In professional networks, risks also come from equipment which users transport between their less protected private home networks and their workplace.

Today, the worst type of malware is ransomware, such as the infamous “cryptolocker” variant: once installed on the victim’s computer, it encrypts the victim’s documents and demands payment to unlock them.

Screenshot of a "Cryptolocker"-type ransomware

Like many administrations, the Université de Rennes 1 has put measures in place to prevent this sort of attack and to restore files, on the condition that the data was stored on in-house servers.

Last but not least: report any breach of security

If your password has been stolen or your computer has a virus, it is very important to report it to those in charge of the services you are using. In this way, they will be able to protect your account and also prevent further potential network and user attacks.


Forget me! Civic rights and internet giants

Authorisation and profiling

Does your smartphone use Google, Apple or Microsoft software? When you install an application from Google Play or the App Store, you are asked to grant « authorisations » to these additional programmes. This is a crucial step in the download and setup process: by accepting them you may well give access to your telephone’s geolocation data, contacts, photos, camera and unique identifier number.

These authorisations do not simply provide you with useful features. Their main aim is to collect data on how you use the (often free) application.

Your data is then transferred to the application operator’s computers (often hosted in another country): the « cloud ». The data is collected over many years and, together with data from millions of other people, is analysed using high-performance statistical and mathematical tools. It is a huge and genuine cybersecurity issue.

Maryline Boizard points out: “Especially personal data which is cross-checked and aggregated from a number of sources. This gradually creates a very clear user profile.”

This profile is even more personal and revealing given that it combines important, private data: your social network friends, your heart rate or the number of daily steps you take if you wear a connected fitness sensor, how you drive and your preferred routes if your car has a GPS.

“If we consider that profiling is an intrusion into user privacy, then this could lead to situations of abuse”, says Maryline Boizard.

Today, for example, insurance companies are highly interested in connected personal objects and connected vehicles. Some insurance companies already offer bonuses or free connected objects to customers who agree to share their data. These companies currently reward careful driving and regular exercise. What, we might ask, will they reward in the future? It seems that we are on the brink of being subjected to differentiated insurance premiums based on mandatory monitoring of policyholders’ behaviour.

Economic model and use of personal data

The « totally free with ads » option is now the internet norm. If online services are to increase their user base, they have to be, at least in their basic version, interesting and free. Online service operators, however, have to make a profit from said services, and they mostly do so through user-targeted advertising.

User profiles: a veritable goldmine for online operators

From an investor point of view, profile portfolios are the real value of any online business. Even if we benefit from a free application, in reality we pay for it as our personal data is shared with the application’s owners. As the saying goes, « If you’re not paying for it, you become the product ». This economic model dates back to the beginnings of commercial internet.

Imbalance

For Maryline Boizard: “A major legal concern is that some applications provide a minimal service in return for a huge amount of collected data.”

There are « flashlight » applications which, by pressing a button, permanently activate a smartphone’s flash. This is indeed useful if you are in the dark. But some of these applications request access to telephone contacts, the device ID and even geolocation data. In this example, the ratio between the service rendered and the cost of providing personal data seems imbalanced.

However, there is no applicable legal text to regulate this imbalance. Lawyers have a vast amount of work ahead in terms of improving user protection measures.

The right to be forgotten

The « right to be forgotten », created as an attempt to set the balance straight, is a battleground between lawyers and internet giants. For example, do internet users have the legitimate right to have all their information removed from Google search results? The CNIL (National Commission on Informatics and Liberty) is of this opinion and fined Google €100,000 in March 2016. You may recall that Google accepted to delete some of its search results which could have prejudiced internet users: the search engine now provides a right-to-be-forgotten form. However, the data was only deleted on the French Google platform, and on its other sites only on the condition that the request came from France. An internet user in America can still access the deleted results. Google appealed the CNIL’s verdict and argued that French law cannot be applied overseas. To be followed…

Over and above the right to be forgotten is the ever-present question of profiling. If we could completely delete our online traces, and all access to our personal information, the commercial use of our data would be limited by design. But this is not the case today: by ticking the « I agree to the general terms and conditions » box, we digitally sign an irreversible disposal agreement of unspecified duration.

Manage profiling

Maryline Boizard is currently working on a profiling research project with IRISA (Institut de recherche en informatique et systèmes aléatoires – joint research centre for Informatics, including Robotics and Image and Signal Processing) computer engineers. The project was launched in June 2016.

The lawyers involved in this project are working to provide an overview of the current legislation in order to gather current user protection information. The researchers will then check if new, planned measures are legally and technically adequate: there is no point in legal protection being available to individuals if it cannot be technically implemented.

For example, there is a law which states that individuals may maintain full control over their private data (this is known as digital self-determination). Can this law be enforced on the operator? In other words, can internet users request that all their data be deleted from a social network, for example? If so, how can this be guaranteed, given that today, data can only be fully controlled if it remains stored on one single computer? The legislator will be able to impose cut-off rules even if they are difficult to guarantee.

Simplify general terms

An experiment has just been carried out in Norway, during which the general terms and conditions of 33 of the country’s most common smartphone applications were read out loud. The combined documents were longer than the New Testament and it took the brave volunteers more than 30 hours to read to the end!

However, digital economy development is a question of trust: the general terms and conditions need to be simplified and harmonized across the globe. In this way, users will be correctly informed as to how their data will be used and can therefore accept the conditions in full knowledge of the facts.

Create a protection tool

“Sociologist Catherine Lejealle’s research on internet users at the ESG Management School in Paris reveals an element of fatalism”, says Maryline Boizard. “As we are used to the prevailing internet economic model – where we pay for a small service in exchange for a large amount of our personal data – we take it to be a given. Protection measures for the right to be forgotten should therefore be as simple as possible and easy to implement.”

Another internet?

Are there other solutions? Some people feel that a totally different internet economic model is called for: services should be purchased in exchange for guaranteed user-data confidentiality. The drawback is that only those with the financial wherewithal would be protected, and this would breach the principle of Net Neutrality.

Maryline Boizard concludes that: “Everyone must pitch in equally if we are to change the current model to create a model for all.”


Maryline Boizard

Maryline Boizard is a private law lecturer at the Université de Rennes 1 and an Institut de l’Ouest – Droit et Europe (IODE – Western Institute of European Law) researcher. In March 2015, this joint research unit organised a scientific symposium on the right to be forgotten. Maryline Boizard’s work focuses mainly on economic operator civil liability, the right to be forgotten and citizens’ basic rights and liberties in terms of cyber activities.
The lecturer-researcher has launched cross-disciplinary user-profiling research projects with the IRISA, DRUID, DiverSE and CIDRE teams.
Maryline Boizard’s personal research focuses on internet intermediary liability, especially for search engine providers.
Are you unique? Smile, you are being profiled!

Did you know that some of the internet sites you visit can immediately recognise your browser and successfully identify you almost every time? This happens as soon as you click on their page, without you even being aware of it.

Cookie warning: a useful, but already outdated, anti-tracking method

Since 2014, on all websites you visit in France, you have probably noticed a message informing you that the site uses « cookies » and that continuing to browse means you accept the storage of cookies on your computer.

These cookies are trackers. Your computer stores small pieces of data from the website as you browse. This means that you, and your preferences, are identified once you reconnect to the website.

In order to help ensure user privacy, the French CNIL, in accordance with a European directive, has made it mandatory for websites to alert users that they use cookies. Users are therefore fully aware of the use of cookies.

Today, however, browsers can be successfully identified along with the machines which run them, without needing to analyse cookies.

Why track internet users?

Internet service operators need to provide extremely user-friendly offers; they carefully design applications and website interfaces and also try to give the impression that there is a personal relationship between the user and the service. Their main aim is to create precise internet user profiles and use them so that users can find their data and settings without even needing to identify themselves. Users are naturally prone to be loyal to user-friendly websites.

Providing successful internet services is a costly business. However, one method which operators use to develop quickly is to provide free access to their services, at least to the basic version. Under these conditions, if the services are successful, their owners need to have planned to finance thousands of server computers, build a data centre, recruit staff and provide internet connections strong enough to instantly reply to millions of requests.

Profile portfolios

Internet service operators build service-user profiles in order to finance their activity and demonstrate their cost-effectiveness. They can now easily identify users and record their site usage, preferences and data. It goes without saying that this profile is then used to post personalised adverts which the user will see on the free website. Recent studies show, however, that targeted advertising profits are only one part of expected economic spinoffs. The business’ value pillar is the user profile portfolio itself. In this way, certain websites can now profile internet users even if they have not created an account on their website.

Are you unique and, therefore, easy to profile?

The first step to creating internet user profiles, without asking users to identify themselves, is to uniquely identify which browser has been used to access the website. This is what cookies are used for but IT progress has led to much more discreet methods.

Pierre Laperdrix programmed the AmIUnique.org site and is currently completing his thesis under the supervision of B. Baudry.

AmIunique.org is a very easy website to use. It is a research tool accessible to all internet users wishing to check, in English or in French, if their browser’s footprint is unique amongst the 230,000 examples already listed on the website. If this is the case, the website will have proven that a unique footprint can be used to profile a particular browser, and also the people who use it. All this information can be found out without even using cookies; and the user is none the wiser.
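To give an idea of how a footprint (usually called a « browser fingerprint ») is built, the illustrative sketch below combines a handful of made-up browser attributes of the kind a page can read without cookies, and hashes them into a single identifier. Any small difference in the combination produces a completely different digest:

```python
import hashlib
import json

# Hypothetical attributes of the kind a fingerprinting script can collect without cookies.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0",
    "timezone": "Europe/Paris",
    "screen": "1920x1080x24",
    "language": "fr-FR",
    "installed_fonts": ["DejaVu Sans", "Liberation Serif", "Noto Color Emoji"],
    "plugins": ["PDF Viewer"],
}

# Serialise the attributes in a canonical order and hash them: the digest acts as
# the browser's footprint. Change one font or one version number and it changes entirely.
canonical = json.dumps(attributes, sort_keys=True).encode("utf-8")
fingerprint = hashlib.sha256(canonical).hexdigest()
print(fingerprint)
```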

Before clicking on the website, please note that:

  • there is a very high probability that your browser footprint is unique. Even if you think that you use a very popular programme and operating system combination, the differences in computer setup, versions, extensions and fonts create subtle differences which are picked up by detection tools;
  • there is no means of stopping the profiling; it operates without your knowledge simply because you clicked on a website which is equipped with these tools;
  • AmIunique.org will not, of course, use your footprint to track you online. Once your data has been analysed, you will see the results of the different tests which were carried out on your browser. You will also find out your position in relation to all the collected footprints.

So, are you unique? Take the test! (below)


Who is behind AmIunique.org?

This research and educational website was created by Pierre Laperdrix, a second-year PhD student at INSA (National Institutes of Applied Sciences). Pierre is specialising in browser profiling and is a member of the IRISA DiverSE team (a joint team of the CNRS – the French National Centre for Scientific Research –, the Université de Rennes 1, Inria and INSA). Pierre’s work focuses on IT security and confidentiality and also software engineering.

Benoît Baudry, the DiverSE team director, is a researcher at Inria and a software design, test and analysis expert. His work stems primarily from empirical software observations with the aim of improving quality. His research is based on software testing, model-driven engineering and software metrics, and he recently diversified into the study of automated software. Benoît Baudry is the manager of DIVERSIFY, a European project which focuses primarily on this topic.

Why was AmIUnique.org created?

“The aim is to gather a database large enough to study real-time computer system variabilities. The more our website is used, the more representative the database will be”, explains Pierre Laperdrix.

Why create this database? “Simply because we want to counter this discreet and powerfully effective profiling”, says Benoît Baudry.

Scientists are working on two crucial elements in order to achieve this.

Random browser environment for each use

In order to change your browser’s footprint, scientists aim to create an automatic and subtle variation in your browser environment each time you open it. In the long run, if you use the system, several browsers, plug-ins, and font types and versions will be installed on your computer. Instead of simply opening a programme, you will launch an entire environment made up of random elements. The browser will also be reassembled in a haphazard fashion so that the rendered modules change from one visit to another.

Defeat internet predators with imitation

“If the variations in synthetic footprints are to be credible, they have to be able to imitate « real life » internet variations”, continues Benoît Baudry. “This condition has to be met if our system is to go completely unnoticed.”

It is for this reason that the AmIUnique.org website database was created. The website’s development is also part of DIVERSIFY, a European research project managed by Benoît Baudry. The project’s team is composed of computer engineers and ecologists who are specialised in evolution.

“Together, we are trying to apply original biodiversity processes to IT. The aim is to reproduce the same processes in the software sector”, says the researcher.

This system is intended to thwart most footprint detection tools used by service operators because for them, you will appear to be a new user each time you connect. This will only work for websites on which you do not clearly identify yourself.

If you wish to help the researchers create a leading anti-profiling system, click on AmIUnique.org: test your footprint and ask your friends to do the same.

“The more footprints we collect, the better it will work!”, concludes Pierre Laperdrix.

It’s your turn!

Hoax detection on Twitter
Cédric Maigrot and Ewa Kijak

How would Twitter survive without the retweet button? This feature is one of Twitter’s reasons for success. In just one click, you become an inspiring source of information in the eyes of your followers. If you are on the ball, retweet quickly and post interesting comments, you will rapidly double your total followers.

But do you always take the time to check a post before you retweet it? The 140 characters of a tweet and the related information (link, photo/video, location) amount to nothing more than an unverified statement. A tweet can only be verified by checking it against reliable sources.

If, as soon as you read a tweet for the first time, you knew that the information you were about to share was false and even malicious, would you retweet it?

Detecting hoaxes: a major security issue for the general public

As previously mentioned, detecting false information has become, in certain circumstances, strategically important to national rescue services; malicious Twitter accounts were used during the Brussels attacks to add to the general panic by announcing that a hospital treating attack victims was being evacuated following a bomb alert.

This is one of the reasons why Cédric Maigrot’s thesis is not only being funded by the Université de Rennes 1, but also by the DGA (Direction générale de l’armement – French Government Defense procurement and technology agency).

Multi-criteria analysis and confidence index

The young scientist is a PhD student at IRISA and is working on developing an automated analysis tool under the supervision of Ewa Kijak, a lecturer at the Université de Rennes 1, and Vincent Claveau, a CNRS research director. The objective is to obtain a reliability index for a given tweet in near-real time. The tool combines several methods:

  • image analysis to detect image editing;
  • textual analysis (coherence, study of names, places, emoticons, use of pronouns and punctuation style);
  • context checks (is the chosen image used elsewhere? is the tweet’s author known in relation to the message content?);
  • message issuer account characteristics (followers and subscriptions, number of tweets, account creation date and also number of retweets and favourites).

A reliability index is calculated based on these criteria and will pop up as soon as you check a tweet. It will guide you as you click on the retweet button.
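As a rough illustration of how such a multi-criteria index can be assembled, the sketch below combines hypothetical per-criterion scores with hypothetical weights; it is not the team’s actual model, which learns such relationships from data:

```python
# Hypothetical per-criterion scores between 0 (suspicious) and 1 (trustworthy),
# matching the criteria listed above; the weights are illustrative only.
scores = {
    "image_forensics": 0.40,   # signs of image editing detected
    "text_analysis": 0.70,     # coherence, named entities, punctuation style
    "context_check": 0.55,     # is the image reused elsewhere? is the author relevant?
    "account_features": 0.80,  # followers, account age, retweet history
}
weights = {
    "image_forensics": 0.35,
    "text_analysis": 0.25,
    "context_check": 0.20,
    "account_features": 0.20,
}

# Weighted combination of the individual criteria into one index between 0 and 1.
reliability_index = sum(weights[name] * scores[name] for name in scores)
print(f"Reliability index: {reliability_index:.2f}")
```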

“The tool won’t be infallible, of course: false information shared by presumably reliable channels will upset the balance. Also, the range of opinions about the same topic cannot be taken into account. Ultimately, the decision whether to retweet or not is up to you”, says Cédric Maigrot.

A tool for journalists

Vincent Claveau co-supervises Cédric’s thesis: “This research falls into the current trend of « fact-checking » and is now necessary given the accelerated and widespread news feeds which are accessible to all. For example, Le Monde’s « Les Décodeurs » helps and reminds the general public to check what is published on the internet and elsewhere. The press is closely following our tool’s development.”

Skills at stake

The IRISA Linkmedia team has developed solid research expertise in large databases and in automatic image and language processing.

Linkmedia has perfected audio, video and textual analysis tools. One Linkmedia creation can analyse television news, detect names (people, places, etc.) and launch searches on a real person. The tool also indexes thematic reports. This means that experts who make regular guest appearances on news programmes can be followed.

Perspectives

Cédric Maigrot’s fact-checking tool currently works with the help of existing resources (the Twitter and Facebook programming libraries, Google queries, the Hoaxbuster database, etc.). The team first focused on integrating this information and calculating the reliability index. It was no easy task to combine information from such a wide variety of sources.

“The second step will be to develop new features for the tool, such as the use of learning methods and dynamic qualification of the reliability of internet information”, says Ewa Kijak.

Cédric Maigrot plans to take part in MediaEval, a European research workshop which will challenge teams to automatically label a tweet « true » or « false » by using a set of messages which have already been correctly labelled as a training base. The objective will obviously be to determine the most accurate predictions for tweets of unknown reliability.
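A minimal sketch of this kind of supervised set-up is shown below, using scikit-learn with toy tweets and labels (not the MediaEval data): a model is trained on messages already labelled « true » or « false » and then scores an unseen one:

```python
# Toy supervised-learning sketch of the MediaEval-style task described above.
# The tweets and labels are invented placeholders, not challenge data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "Hospital evacuated after bomb alert, share now!!!",
    "Official press conference scheduled at 15:00, says the ministry.",
    "Shark photographed swimming on a flooded motorway",
    "Train traffic resumed on line B after this morning's incident.",
]
train_labels = ["false", "true", "false", "true"]

# Text is turned into TF-IDF features, then a simple classifier is fitted.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_tweets, train_labels)

# Predict a label for a tweet of unknown reliability.
print(model.predict(["Breaking: airport closed, retweet to warn everyone!!!"]))
```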

“Cédric will also study the connections between sources in order to detect those which only quote each other in a vacuum: the reliability of a tweet from such a circle is questionable”, says Ewa Kijak.

To conclude, in the field of machine learning, current research trends rely on large artificial neural networks which can handle vast amounts of information and which have produced excellent results on a wide range of problems.

“We are currently studying how artificial intelligence tools could be useful to our work”, concludes Ewa Kijak.

Ewa Kijak co-supervises Cédric Maigrot’s thesis. Together, they are developing a tool for the automatic detection of false information on social networks.

Links:

https://www.irisa.fr/fr/equipes/linkmedia
http://www-linkmedia.irisa.fr/
http://www.hoaxbuster.com/
Les Décodeurs, from the newspaper Le Monde

Ewa Kijak

Ewa Kijak is a lecturer at the Université de Rennes 1 and researches image description for indexing and information retrieval in the IRISA Linkmedia team. Ewa Kijak’s lectures mainly cover image processing and machine learning as part of the « Digital Image » curriculum at ESIR (École supérieure d’ingénieurs de Rennes – Rennes Engineering College, part of the Université de Rennes 1). She also teaches machine learning and image database indexing for various Master’s programmes.

Vincent Claveau

Vincent Claveau is an IT researcher at the CNRS and a member of the IRISA laboratory in Rennes. His areas of research focus on automatic language processing, text mining and information retrieval.

Cédric Maigrot

Cédric Maigrot is a PhD student at the Université de Rennes 1.
Cédric completed a research-oriented internship at the Montpellier Laboratory of Computer Science, Robotics, and Microelectronics before starting his thesis at the Université de Rennes 1. The aim of the internship was to predict behavioural changes in people at risk of suicide by studying their Facebook posts. Cédric Maigrot wanted to continue studying this automatic approach to social networks and is interested in applying the detection of false information directly to real data.
The EMSEC team: IT security experts

In February 2016 in Rennes, IRISA created a new team, EMSEC (Embedded Security and Cryptography), which is specialised in embedded system security.

Pierre-Alain Fouque and Gildas Avoine co-manage the new team and are both Institut universitaire de France members.

“’Embedded systems’ are all types of devices which are inserted in mobile objects of all sizes, from smart cards to means of transport. They all contain electronic parts which store and handle information which may, or may not, be of a sensitive nature”, explains Pierre-Alain Fouque, a lecturer at the Université de Rennes 1.

“By security, we mean maintaining the integrity, authenticity and, when necessary, confidentiality of data”, explains Gildas Avoine, professor at INSA Rennes.

Security

EMSEC deals with security in the broadest sense and has developed the following techniques:

  • risk analysis, developed at EMSEC by Barbara Kordy, based on attack trees: it studies the different types of attack on a system (public services, for example) and the measures to take to avoid them;
  • the study of embedded programme traceability risks, in collaboration with the DiverSE team managed by Benoît Baudry, which aims to protect internet users from automated and non-consensual profiling;
  • microchip analysis; these chips are found in everyday items (bank cards, transport passes, car keys, passports, access badges) but enjoy only a low level of protection;
  • telephone authentication when phones connect to operator networks via their SIM card.

EMSEC does not differentiate between hardware and software aspects, because attacks on software may be carried out via the hardware and vice versa: the keys in use may be captured by listening to indirect signals generated during an encryption operation (changes in electricity consumption, electromagnetic noise, etc.).

Cryptology

In most of the cases studied by EMSEC, information is digitally processed. The study of the electronic encryption of transmissions and data is, therefore, one of the team’s core concerns.

“Cryptology is the science of information security, where data is protected through « encryption », thus ensuring that it will no longer be possible to find the slightest information about it. Cryptography deals with encryption algorithms, while cryptanalysis analyses possible attacks on existing schemes”, explains Adeline Langlois, CNRS head of research and EMSEC member.

Adeline Langlois is a mathematician specialised in cryptography based on Euclidean lattices. The robustness of this tool appears highly promising in the face of developments in cryptanalysis techniques.

“Crypto is fun”: give cryptography and cryptanalysis a go with Julius Caesar!

Cryptanalysis: strengthen cybersecurity through attacks

It has to be assumed that an encryption algorithm used in a real-life situation (to secure your online bank connection, for example) will be constantly attacked by people trying to pirate the system for their own ends.

Cryptanalysts therefore permanently attack encryption algorithms. They do so in order to be the first to detect possible weaknesses in the latter and to therefore attempt to correct them, when possible, before harm is done.

“There are actual cryptosystem attack contests”, says Patrick Derbez, lecturer at the Université de Rennes 1. “The aim of the CAESAR competition, for example, is to devise authenticated encryption systems which cryptanalysts then try to break. One of the objectives is to create the strongest algorithms.”

There will never be 100% security

“The security which results from an encryption system will never be 100% complete”, warns Pierre-Alain Fouque. “The largest weaknesses arise from how it is used in systems and by users.”

As for the algorithm itself, its level of security is measured by the difficulty of inverting the mathematical calculations required to encrypt the original data. This is usually a question of calculation time and available computer memory. However, what is impossible for computers today will most probably be possible in the future. Take, for example, the defeat of the DES algorithm in 1999 and its replacement in 2001 by AES/Rijndael. The latter is still used today (2016).

There are several methods of breaking an encryption algorithm: from purely mathematical « attacks » to physical attacks which observe or disturb the hardware and thereby weaken the software. These attempts provide extra information about the internal workings of encryption algorithms.

Weaken encryptions, block by block

AES is the algorithm standardised in 2001 which, 15 years on, still secures a vast share of online activity and much more. If it were broken, without backup measures, the consequences for user privacy and the economy would be catastrophic.

This is one of the reasons why Pierre-Alain Fouque, EMSEC co-director, and Patrick Derbez devised attacks on AES. They demonstrated that, without breaking the algorithm, extra information about its internal workings could be obtained by disrupting an electronic circuit with a laser as it operated.

AES, and block encryption systems in general, are symmetric cryptography algorithms: the same key is used to encrypt and decrypt the message. In order to encrypt content, AES processes the data in several « rounds », each using a sub-key derived from the previous one. If the encryption is stopped before the full number of rounds has been completed, the whole system is weakened and the original key, which protects the encrypted message, becomes more accessible.
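The « same key encrypts and decrypts » property can be seen in a few lines with the Python cryptography package (here AES in GCM mode); this shows normal use of the full algorithm, not the reduced-round attacks discussed above:

```python
# Symmetric encryption with AES-GCM via the 'cryptography' package:
# a single shared key both encrypts and decrypts.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # the single shared secret
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must never be reused with the same key

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)   # the same key recovers the message
assert plaintext == b"attack at dawn"
```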

“All in all, given today’s means, AES will not be broken from a mathematical point of view”, says Pierre-Alain Fouque. “It is the way the system is set up which may be vulnerable.”

Quantum possibilities

Cryptology will be revolutionised by quantum computers: calculations which take a prohibitively long time on regular computers today will be rapidly solved on quantum machines. The answer will lie in the development of new cryptographic techniques based on error-correcting codes and Euclidean lattices (the latter are Adeline Langlois’ speciality within the EMSEC team).

Even though the team is very recent (its current members were recruited between 2012 and 2015) EMSEC has published and received awards for many articles in the most prestigious international cryptology symposiums: ASIACRYPT, EUROCRYPT, CRYPTO and ACNS, etc.

For example, during the 2015 Asiacrypt conference, an EMSEC member, Pierre Karpman (a PhD student co-supervised by Pierre-Alain Fouque), and Adeline Langlois were individually rewarded for their work. Pierre Karpman proved a weakness in the SHA-1 hash function and Adeline Langlois presented a cryptographic improvement based on Euclidean lattices.

5 of the permanent EMSEC members. From left to right: P. Derbez, G. Avoine, A. Roux-Langlois, B. Kordy and P.-A. Fouque

On 1 September 2016, Stéphanie Delaune, CNRS head of research, joined EMSEC’s permanent members. Thanks to Stéphanie, EMSEC was awarded an ERC Starting Grant, with support from IRISA and the Région Bretagne.
There are also more than ten non-permanent EMSEC members (PhD students, postdoctoral researchers and visiting scientists).

Crypto's fun: give it a go!

You need a key to encrypt a message. The most famous example dates back to Julius Caesar; he used a very simple system to communicate with his armies. Why not test the algorithm yourself and then try to unravel a secret message which has been encrypted with this method?

Just follow Patrick Derbez and Adeline Langlois' (EMSEC team researchers) instructions.

To start, encrypt this message:

"CESAR" with a shift (key) of 4



No public key: no (online) privacy

Two safety boxes, two keys and one lock: the workings of public key encryption explained in 10 minutes

Sylvain Duquesne uses two small safety boxes, two keys and one lock to explain cryptography to his Masters in Cryptography students at the Université de Rennes 1.

Let's say that you are taking your lunch break at work and that you want to transmit your medical records to your new doctor. As this is private information, you do not want your employer to be able to access it. There is a chance that this could happen as the IT network you use belongs to your employer.

Therefore, the data which is transferred between your office computer and your doctor's computer has to be encrypted. Once encryption has taken place, if a third person (let's call your employer "Eve") hijacks your internet connection and reads your messages, she will only be able to see a row of meaningless symbols, for example, “fjEdOoSqz#9&Udo”6ihLac3”.

That said, let us take a moment to think: if two people are to communicate using encrypted messages, then they need to firstly agree on a joint encryption key. You will have seen this when you tried the Caesar encryption.

But how can you and your doctor use the encryption key without Eve being able to see it? You might have previously given your doctor the key during an appointment at the medical practice, but this would not be practical. It would even be absurd: no one goes to Facebook's head office to safely connect to their account!

There is, however, a solution. Do you know what it is? It was only discovered in the 70s by W. Diffie and M. Hellman (and undoubtedly slightly before by the British secret services).

To begin with, it must be noted that a Caesar-type encryption scheme is symmetrical: the same key is used to encrypt and decrypt a message. This key is, therefore, very easy to use but cannot be used on an unsecured connection as Eve could access and copy it during the transfer.

Diffie and Hellman created an asymmetrical system. Let's put it into practice. You have told your doctor that you will send him private information. Your doctor has created a key pair - one public key and one secret key - on his computer. He replies by sending you his public key, not a symmetrical key. The public key will enable you to encrypt the information you send him, but it cannot be used to decrypt it. Once your doctor receives your information, which has been encrypted with his public key, he uses his private, secret key to decrypt it.

There is one disadvantage to the asymmetrical approach: the calculations which both of your computers need to make are too lengthy for transferring large volumes of data (your medical records) in an acceptable length of time. So the solution is to use asymmetrical encryption only to exchange a symmetrical key (one which is much more robust than a Caesar-type key) with your doctor. You can then use it to send your files without Eve, your employer, being able to access or copy them.
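Sketched in code, the doctor scenario looks roughly like this (using the Python cryptography package; real systems wrap the same ideas in certificates and protocols such as TLS):

```python
# Hybrid encryption in miniature: an RSA key pair, the public key wraps a fresh
# symmetric key, and that symmetric key encrypts the bulky data. Illustrative only.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The doctor generates a key pair and publishes only the public half.
doctor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
doctor_public = doctor_private.public_key()

# You: create a one-time symmetric key, encrypt the records with it,
# then wrap the symmetric key with the doctor's public key.
session_key = Fernet.generate_key()
records_ciphertext = Fernet(session_key).encrypt(b"blood test results...")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = doctor_public.encrypt(session_key, oaep)

# The doctor: unwrap the symmetric key with the private key, then read the records.
recovered_key = doctor_private.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(records_ciphertext))
```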

The maths behind public keys

This achievement has been made possible by modular arithmetic tools which are explained in detail by Burt Kaliski from the RSA Laboratories.

“W. Diffie and M. Hellman created an asymmetrical public key system but they did not provide, at that time, an associated encryption system. This was developed by R. Rivest, A. Shamir and L. Adleman. They called their algorithm "RSA" after their surname initials. It is still used today for secure online transactions”, explains Sylvain Duquesne.
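To see the modular arithmetic at work, here is a toy RSA example with textbook-sized numbers, far too small to be secure but enough to show the mechanism:

```python
# Toy RSA with tiny primes (insecure, for illustration only). Requires Python 3.8+
# for the modular-inverse form of pow().
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: the modular inverse of e

message = 65                   # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)        # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)      # decrypt with the private key (d, n)
assert decrypted == message
print(ciphertext, decrypted)
```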

Breaking cryptology

RSA reliability also depends on the difficulty of solving certain extremely complicated mathematical problems (large integer factorization). All RSA-encrypted data will only be potentially accessible when this problem can be quickly and efficiently solved. Researchers are studying other possibilities, such as using elliptic curves; these are thought to be even more reliable. Sylvain Duquesne is working on this topic.

Interception

Suppose that Eve intercepts the public key which your doctor sent you and replaces it with her own key in the message which you are about to receive. This would compromise the entire system. The current trend is to use certificates issued by a recognized body; this aims to guarantee that the public key which you receive really is your doctor's key.

"Certificate" is a term which you will come across in your browser's security parameters. When you contact your online bank, your computer uses a certificate to check that you are using the public key issued by your bank, rather than a pirated key by a fraudster who is trying to steal your money. This system is not infallible as it depends on the certification's authenticity; the latter may also be a victim of fraud. However, it reduces fraud-risk levels to a manageable level.

What about quantum?

It is often said that the arrival of quantum computers will put an end to traditional cryptography.

For Sylvain Duquesne, “The solution is to use error-correcting codes and Euclidean lattices. But quantum computers and quantum cryptography are not the same thing. The latter already exists and works on an experimental level. Keys can be transferred using quantum systems. As quantum states change when they are observed, any interference in the process is detected. Eve can no longer interfere without being seen.”

Sylvain Duquesne

Sylvain Duquesne is a maths professor at the Université de Rennes 1. He is an IRMAR researcher and is specialised in the mathematical foundations of cryptology, especially elliptic curves and their generalisation. He carries out calculations on abelian varieties and studies their application to cryptology. Sylvain Duquesne teaches Masters students (1st and 2nd year) fundamental cryptology theories and their efficient use in software to avoid attacks. The aim is to develop students' ability to adapt. It must be noted that, where cryptography is concerned, the tools and methods which are taught are obsolete after two years. As future cryptology professionals, students must be able to very quickly detect, understand and master both theoretical and technical changes. Students use their acquired skills during internships with Amossys, Orange, Sagem, Airbus or the Ministry of Defence.
Side-channel attack: a way to snoop into your smartphone and bank card

Let's start with a brain-teaser

Imagine yourself in a room with three numbered switches. The switches are at the "off" position. A hermetically sealed, but unlocked, door separates you from another room in which there is a lightbulb.

You have to work out which of the three switches in the first room controls the lightbulb but you can only enter the second room once to see if the lightbulb has been switched on. How can you work out which switch works the lightbulb?

Answer

Turn on switch n°1 and wait for 5 minutes, then turn it off and switch on n°2. Go into the second room:

  • if the lightbulb is on, it is controlled by switch n°2;
  • if it is off and cold to the touch, it is controlled by switch n°3;
  • if it is off and warm to the touch, it is controlled by switch n°1.

Did you think of touching the lightbulb and using its heat to find the answer?

This brain-teaser helps us to understand Hélène Le Bouder's work as a postdoctoral researcher at the Rennes High-Security Laboratory of Inria. Use this brain-teaser to:

  • entertain your friends;
  • understand that electronic circuits can "leak" and be attacked by the "side-channel attack" method.

Using physics to rescue cryptanalysis

As Hélène Le Bouder explains, “if you didn't know the answer to the previous brain-teaser, you probably first tried to solve it using logic only; that is to say, the switches' on/off possibilities. It goes without saying that this attempt will not have been successful.”

This is more or less what happens when someone wants to decrypt the contents of an electronic circuit, in a smartphone for example, by attacking it using a mathematical technique only. The latest generations of these circuits are designed to enable your data to be fully encrypted. If you have enabled this protection and your device is stolen, for example, then your photos, contacts and messages, etc. will be very difficult to access.

Hélène Le Bouder goes on to say that “AES is one of the most frequently used algorithms in circuits and is used to transform data into an incomprehensible mumbo-jumbo by manipulating it with a digital key. The data can then be retrieved if the same key is known. AES is renowned for its reliability: the data it protects takes far too long to decrypt using current means. Unless there is a bug in the circuit which would weaken AES (this frequently occurs), the attack is too costly in this respect.”

In order to solve the lightbulb brain-teaser, you have to step outside pure logic: you have to touch the lightbulb to measure its heat. This is a physical phenomenon, the result of its activation by one of the switches. Likewise, scientists measure the electromagnetic "leaks" produced during a calculation, and the energy consumption of an encryption processor when it processes data-protection keys. This is how physics helps cryptanalysis.

To carry out this attack, an antenna needs to be placed near the circuit and the attacker must know exactly what to "listen" to. Hélène Le Bouder provides a demonstration on a test circuit.

Example of a side-channel attack

AES is symmetrical: the same key is used to encrypt and decrypt data. The algorithm works in several rounds (up to 14, depending on the key length), each using a sub-key derived from the main key. Each round key is made up of 16 bytes (characters), and each byte can take up to 256 different values.

Isolate one of the key's characters (bytes)

The secret behind side-channel attacks is to study the moment when one particular byte is processed. A probe measures electromagnetic radiation which reflects the precise energy consumption of the processor's transistors.

Simulate possible values

Furthermore, by using mathematical models, approximate consumption curves of the circuit can be simulated for each of the 256 possible values of the key byte.

Correlate

“By comparing the measurement to these 256 curves, we can determine which simulated curve is the right one. Then we apply the same process to all the key's bytes”, explains Hélène Le Bouder.

It is also possible to use other weaknesses: a difficult calculation will take longer than other, simpler calculations and use up more energy. Measuring calculation times and energy consumption are the main means of side-channel attack.
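The correlation step can be simulated in a few lines. The toy sketch below is only a stand-in for the real attack: it generates noisy « measurements » proportional to the Hamming weight of (plaintext XOR secret byte), with no S-box and no real traces, then tests all 256 guesses and keeps the one whose predictions correlate best with the measurements:

```python
# Toy correlation attack on a single key byte, using simulated leakage.
import numpy as np

rng = np.random.default_rng(0)
secret_byte = 0x3A
plaintexts = rng.integers(0, 256, size=2000)

def hamming_weight(values):
    """Number of bits set to 1 in each byte."""
    return np.array([bin(int(v)).count("1") for v in values])

# Simulated side-channel measurements: leakage proportional to the Hamming
# weight of (plaintext XOR secret byte), plus Gaussian noise.
traces = hamming_weight(plaintexts ^ secret_byte) + rng.normal(0.0, 1.0, plaintexts.size)

# Try every possible key byte and keep the guess whose predicted leakage
# correlates best with the measurements.
correlations = [np.corrcoef(hamming_weight(plaintexts ^ guess), traces)[0, 1]
                for guess in range(256)]
recovered = int(np.argmax(correlations))
print(hex(recovered))  # almost always prints 0x3a
```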

How does this apply to real life?

This very simplified demonstration was carried out on a test microcontroller which is not specialised in security. Moreover, the experiment presented requires control of the AES processing on the circuit: the programme was installed by Hélène Le Bouder herself.

This experiment proves that encryption algorithms must only be implemented in circuits by strictly following tried and tested security guidelines, because mistakes at this stage greatly weaken the cybersecurity chain.

How can security be increased? On a software level, it is always possible to ensure that the key's bytes are not processed in order, and to carry out useless calculations to confuse possible spying. But the best protection method is to create security-dedicated circuits, as the levels of protection are then integrated right from the outset, at the hardware level.

“However, it has to be remembered that, as with all cryptology sectors, measures and counter-measures develop jointly in an endless competition: specialists will find a weakness for each protection inserted in a circuit. This weakness will be corrected in the following version, which will in turn be "broken" by cryptanalysts, and so on”, concludes Hélène Le Bouder.


Hélène Le Bouder

Hélène Le Bouder has a PhD in cryptology with a specialisation in physical attacks. She is currently a postdoctoral researcher at the Rennes-Bretagne Atlantique Inria High-Security Laboratory within the Pôle d’excellence cyber. Her work includes:
  • the study of polymorphic codes to resist physical attacks;
  • the implementation of an electromagnetic curve database;
  • the development of observed attack archives (CPA, DPA, template attacks, etc.).
Lasers and radio waves: disrupting encryption circuits

The IETR "Cybersecurity platform"

A platform project at the Rennes Institute of Electronics and Telecommunications (IETR) has just received its first funding grant. This equipment will be dedicated to analysing electronic systems' vulnerability to security attacks.

“We will analyse the reliability of secured electronic circuits which are specialised in cryptographic operations when faced with external optical or electromagnetic attacks. Such attacks can cause circuit faults and weaken the attacked systems' protection”, says Laurent Pichon, who is in charge of the optical (laser irradiation) and microelectronic attack section of the future platform.

The study of electrical and electromagnetic behaviour, which treats faults as a mine of information, can assist in finding the encryption key protecting the targeted information.

“In a fault, an intermediary step in the operation becomes visible; this weakens encryption. For example, the AES algorithm, which is often used on the internet, has many successive steps which, if they are correctly processed, result in a strongly encrypted message. On the other hand, if the operation is interrupted at the right moment, it is then possible to recover vital information about the encryption key”, explains Laurent Pichon.

The platform will combine the skills of two IETR departments: the department of Microelectronics and Microsensors and the department of Antenna and Microwave Appliances. The first department will study cryptographical circuit disruption by laser irradiation. The second department will work on electromagnetic disruptions.

Laser disruptions

A specific experimental platform combining a pulsed laser and a dedicated measurement system will record the components' electric activity. The platform will be installed in the IETR's buildings; more specifically in a "grey" room with filtered air to reduce dust to a minimum and an atmosphere conducive to obtaining reliable measurements.

“We will try to detect the disruptions which are created by a laser beam on a group of transistors (inverters). This will be picked up, among other methods, by listening to the induced background electronic noise. This innovative method has scarcely been used until recently”, says Laurent Pichon.

Studies will be based on industrial components and electronic systems made for secured or military applications. They will then be applied to all future generation microprocessors and cutting-edge technological circuits.

The aim of this platform will also be to help manufacturers, in the long term, to create laser-irradiation resistant components. The platform will be located within the department of Microelectronics and will be generally dedicated to measurements, including their cybersecurity dimensions. The platform will lead to new research in component measurement and metrology within electronic systems, at the junction between technology and microelectronic circuits.

Disruption by radio frequency

This other experimental platform is also part of the IETR but is located in the INSA buildings. It will concentrate on another electromagnetic topic: radio frequency (RF).

Both platforms will be studying very varied wavelength ranges. The laser will attack the very small microelectronic scales whilst the RF will take an entire component or appliance into consideration, especially its access channels (power, connections).

The aim of this RF platform is also to obtain erratic electronic operations which could be interpreted in an attempt to find the encryption keys.

INSA's reverberation chamber is operational and will be used to study reverberation phenomena. This could result in the identification of electromagnetic coupling scenarios to which the appliance is sensitive.

“The aim, besides the cryptanalytic aspect, is also to try to disrupt the system so as to make it unavailable during the disruption. In fact, security is not only a question of being able to send guaranteed, safe information, it is also about being able to send it at a chosen moment. Security is instantly compromised if the system no longer replies; in an autonomous car, for example”, says Philippe Besnier, RF Pole project manager.

Of course, studies such as these are carried out on a large-scale within the military sector. The IETR platform does not aim to use vast means but to ascertain if basic attacks carried out with modest means on public systems could generate critical consequences.

“Furthermore, if it is obvious that the tested systems need protection, then even before that step they must be fitted with a warning system: they must be able to detect intrusion. Manufacturers will have to devise warning solutions, and the platform will be particularly efficient in this area thanks to the combination of laser and RF approaches”, concludes Philippe Besnier.


Laurent Pichon and Philippe Besnier

Laurent Pichon is a professor of Electronics at the Université de Rennes 1, within the GEII department of the Rennes IUT. He teaches Master 2 level electronics, component physics and nanotechnology. He is a researcher within the Rennes Institute of Electronics and Telecommunications (IETR) Microelectronics and Microsensors department, which focuses on making and characterising silicon-based semiconductor devices and associated electronic components. Philippe Besnier is head of research at the CNRS within the IETR Antenna and Microwave Appliances department. His work is based in electromagnetic compatibility (or how to reduce interference between devices using radio waves). He is specialised in reverberation chambers, near-field measurements and the simulation of interactions in complex systems.
Intimate spies: the internet of things

There are many very trendy objects which are designed to be user-friendly on a daily basis: watches, fitness sensors attached to a smartphone, and also connected electricity, gas and water meters, etc. These objects are becoming increasingly central to our private lives.

However, the close analysis of our physical performance or our electricity consumption does not take place within the connected object itself, and rarely on the computers or smartphones which we more or less control.

These connected objects have at least two things in common:

  • they produce a series of continuous data (this is known as time series data);
  • they send this data to be processed by gigantic data farms, which are often located abroad and controlled in the infamous cloud.

For users, the advantages of these systems are two-fold:

  • on a personal level, they receive usage updates (physical fitness and exercise statistics, household electricity consumption, etc.);
  • on a larger scale, they combine millions of subscribers' data to create user profiles which then enable users to position themselves and even influence their usage (my apartment uses up as much energy as a house, how can I fix this?).

But how will the very personal data which these objects collect be used? Biomedical data is indeed of a very personal nature (our heartbeat, for example), but household electricity consumption can also reveal vast amounts of information about a home's inhabitants. When the latter is measured very precisely (for example, per minute), the various electrical devices within the home, such as toasters, kettles and washing machines, generate an almost unique consumption profile. Besides knowing when homeowners are present or absent, household electrical consumption analysis can therefore provide information about the number of occupants, household composition, religious practices (reduced electrical activity during the Jewish Sabbath or delayed activity during the Muslim Ramadan) and even state of health (medical beds, for example).

In these circumstances, a relationship of trust between the service supplier and the client is crucial, especially given that these types of applications are set to increase massively and are leading to the advent of the "internet of things" (IoT).

Clearly, processing data such as this in the cloud is an issue, as the user has no control over it whatsoever. Furthermore, the service provider alone has centralised access to all the data.

How, therefore, can the system's advantages be maintained on both levels (individual and community) whilst guaranteeing total user data confidentiality?

Tristan Allard and his Inria, EDR and LIRMM colleagues have devised a solution: the Chiaroscuro project.

"Our system extracts representative user profiles from time series on millions of personal devices (laptops, smartphones, tablets) without endangering individual private lives”, explains Tristan Allard.

"Chiaroscuro" refers to "light-dark", a term used to describe a technique using contrasts between light and dark in Italian Renaissance paintings. The aim of the Chiaroscuro project is to describe the internal workings of algorithms, which mix "uncoded" elements (unencrypted but disrupted) and other "dark" elements (which are completely encrypted).

Chiaroscuro avoids two major obstacles which are often present in personal data analysis schemes.

  1. Time series are never copied to a central server (no individual time series information leaves a personal device without first having been protected).
  2. The system does not use a resource-intensive encryption protocol, yet it still accepts arbitrary participant connections and disconnections.

"Chiaroscuro uses paired data analysis techniques”, says Tristan Allard. “Instead of being processed in the cloud, data is processed in a distributed and secure manner between personal devices and individuals themselves, according to the principles of non-supervised classification. We propose an innovative approach which combines encryption processes and data anonymisation so that only aggregated and disrupted information is revealed during processing."

The system is based on several mathematical and statistical tools:

  • an iterative algorithm, known as k-means, to create typical time series profiles;
  • protection of the typical profiles (centroids) during the computation, by adding digital noise generated in a distributed manner by the participants' devices so that the differential privacy model (the current de facto standard) is satisfied, as sketched below;
  • an additive homomorphic encryption system, which enables encrypted data to be added together without having to decrypt them first.
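As a rough illustration of the second ingredient, here is a minimal sketch of a differentially private centroid release: a group's average consumption profile is perturbed with Laplace noise before being revealed. The epsilon value, the bound on readings and the simulated data are assumptions made for this example, not the project's actual parameters or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_centroid(series: np.ndarray, epsilon: float, max_value: float) -> np.ndarray:
    """Average a group of bounded time series, then add calibrated Laplace noise."""
    k, points = series.shape
    centroid = series.mean(axis=0)
    # One household can shift each of the `points` means by at most max_value / k,
    # so the L1 sensitivity of the whole released vector is points * max_value / k.
    sensitivity = points * max_value / k
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=points)
    return centroid + noise

# 1,000 simulated households, 24 hourly consumption readings each, bounded by 3 kWh.
consumption = rng.uniform(0.0, 3.0, size=(1000, 24))
print(private_centroid(consumption, epsilon=1.0, max_value=3.0)[:4])
```

With many participants the noise is small compared to the average, so the typical profile stays useful while no single household's series is exposed.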

In the system, each participant holds part of the decryption key, so a minimum number of peers is needed to decrypt the data. This avoids having to create a "trusted third party" which would run the analysis algorithm out of the participants' sight and would have access to both the data and the results.

Chiaroscuro has been demonstrated online and was recently presented at SIGMOD 2015, a scientific conference, and at ICDE 2016.

There is, therefore, a technical solution. It remains to be seen if the online service operator business model will be able to adhere to it.

A question for Tristan Allard: Google, Apple and differential privacy

Apple is following in Google's footsteps and has just announced that user privacy will be reinforced by differential privacy during data collection. This will be made possible by iOS 10, the new version of Apple's operating system. What are your thoughts on this?

"Differential privacy is a formal model of statistical, published information which respects user privacy. Dwork, a Microsoft Research researcher, created it in 2006 and it is currently being studied by many research projects. Researchers are interested in this model due to its easy implementation (noise addition to a sum and random byte switches in a byte table) and also due to the good formal contents it offers. Besides research, however, this model is still today infrequently used in practice. But recent interest from internet giants, beginning with Google, for collecting certain statistics from its browser, Chrome and now Apple lead us to believe that things could change."

"Even though Apple's announcement proves that it is trying to reassure its iOS 10 users, there are still many concerns as to whether the new information collection processes will respect their private lives. Apple has not disclosed any information as to how the latter will work. It is therefore impossible to check whether the disruption is successful and provides sufficient protection. For example, the level of protection in differential privacy can be configured: the higher it is, the stronger the disruption must be. But the stronger the disruption, the harder it is to analyse disrupted information. Where will Apple position itself in response to this "private life vs. usefulness" compromise? Furthermore, it appears that the level of protection is diminished with the total amount of information collected for a person. Is Apple limiting this quantity and if so, how? In a nutshell, following Apple's announcement, a description of the used disruption technique, its implementation and configuration should be made available online as a second step in gaining user trust in these new collection processes."

"But, even before set up and differential privacy issues are discussed, users must be able to express their consent to data collection in an explicit and informed manner. However, if current iOS users accept to provide Apple with "Diagnosis and Usage" reports, they are clearly accepting "private data" collection without any precise information about which data is collected or even about the granted level of protection. It is, therefore, difficult to classify this type of consent as "explicit and informed"."

Where does the Chiaroscuro project fit in to all these innovations?

"Chiaroscuro is unique in the sense that it articulates between encryption and differential privacy. More specifically, through Chiaroscuro, we are attempting to calculate the k-means algorithm on distributed and confidential data. Ideally, we would like to apply all k-means operations on encrypted data (and therefore without revealing any data information). The problem is that we only know how to effectively carry out very simple calculations on encrypted data (only additions, for example). The idea behind Chiaroscuro is to be able to do everything we can on encrypted data (mainly additions) and use differential privacy to reveal the results of the additions (disrupted) and carry out other operations with it (especially comparisons and distance calculations). Chiaroscuro got its name from the combination of light (the part of the algorithm which is carried out on revealed information after having been disrupted) and dark (the part which is carried out on encrypted data)."

Links: the SIGMOD 2015 and ICDE 2016 conferences

Recommended by Tristan Allard:

Self-data research: team work on an alternative business model where users control their own data.

Fing: My Info project.

DRUID team

Tristan Allard is a member of the IRISA DRUID team, which brings together Université de Rennes 1 teaching staff from ESIR, ISTIC and the Lannion IUT. DRUID focuses on the design and analysis of data management systems and on information fusion.

“DRUID aims to offer data management systems which take source incoherence into account (human beings, sensors, etc.). The main applications are crowdsourcing (participative production) and the management and analysis of large social networks”, explains David Gross-Amblard, DRUID co-director.

As part of this framework, DRUID naturally takes security issues into account (protection against malicious actions and confidentiality and privacy). The Lannion site studies user trust (Belief Function Theory). DRUID team members in Rennes are focusing on optimising participative production platform task assignment so as to take participants' skills into account. The platforms under study belong to two categories. The first deals with internet-organised work (Amazon Mechanical Turk, Foule Factory) and the second deals with participative science (SciPeople, Vigienature).

“One of the main aims of these projects is to improve the qualification and management of platform participants' skills, especially through prioritisation. The research was recently presented at the WWW’2016 conference, where it was awarded the prize for best PhD paper.” David Gross-Amblard is understandably proud of this award.

DRUID also focuses on the grey areas of participant security (presence and skill confidentiality) as well as that of the task assigners (the legality of the assigned task, etc.).

Tristan Allard

Tristan Allard is a lecturer in IT at the Université de Rennes 1 and teaches at both master and degree level. His classes cover privacy-preserving data publishing, databases and their security, and programming. His research focuses on the everyday use and large-scale analysis of personal data while protecting users' private lives: a key issue in the development of the knowledge society.

David Gross-Amblard

David Gross-Amblard, IRISA DRUID team co-director is an IT professor at the Université de Rennes 1 (ISTIC). His teaching is based on general data management and his research focuses on current topics such as crowdsourcing, social networks and data security.
Securing connected cars

A third of new mobile phone contracts in the US in 2016 are with... cars.

Our vehicles are becoming increasingly connected. Almost all the current models can be linked to a smartphone or even have their own SIM card. They can indicate their position to manufacturer breakdown services and traffic-jam mapping applications.

While the autonomous car is being developed, the industry is creating data-exchange systems between vehicles. The aim is to make traffic safer. Two connected cars about to crash into each other could warn one another and activate an emergency braking system. Vehicle convoys could benefit from fuel savings and increased safety levels.

Cooperative driving is based on a particular type of wireless mobile network known as "ad hoc" where the network is created between connected objects without the need for a central hub. It is this type of network in particular which enables NGOs, upon their arrival at a natural disaster site for example, to roll out a wireless communication infrastructure within several minutes.

But let us concentrate on connected cars. Their development has generated a fair amount of questions. The first concerns the confidentiality of the transferred data: the connected vehicles currently being manufactured transmit their information (speed, direction, position) about ten times per second without reliable encryption, making it possible to track their speed, direction and identity in real time. On the other hand, hacking and identity theft in a vehicle network could have very serious consequences: from simple tracking to intentionally caused accidents. Recent examples have already shown a vehicle's controls being taken over remotely via its infotainment wireless connections.

For Gilles Guette, “Increasing a connected vehicle's security is, therefore, a genuine cybersecurity issue. As for vehicle networks, I am studying the conception of user-privacy solutions and I am working on safe exchanges between moving vehicles, under specific constraints.”

As a researcher, he is studying how to manage a vehicle network specificity: the highly variable length and type of connection between network elements.

Let us take the first type of connection: a smartphone to its owner's car. This is an infotainment connection and it is long-lived, because it is created and persists each time the driver uses the car. It enables drivers to use the phone hands-free, listen to music and even surf the web on the vehicle's computer, etc. It needs authentication and encryption to remain confidential and safe from attacks and tracking.

The second type of connection is much more transient: connections from cars to dedicated roadside cooperative-driving units, or between two connected vehicles at risk of crashing into each other. These connections are part of cooperative driving and must be authenticated to prevent hacking. They cannot be encrypted, because the required calculations would increase the car's reaction time (e.g. automatic braking) in the event of an imminent crash. Furthermore, data confidentiality is not required, as this information must be available to all nearby vehicles.
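As a rough illustration of "authenticated but not encrypted", here is a minimal sketch assuming the third-party Python "cryptography" package: a cooperative awareness beacon is signed so that its origin can be verified, but the payload is left readable by every nearby vehicle. The message fields and key handling are illustrative assumptions, not a vehicular standard.

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

vehicle_key = ed25519.Ed25519PrivateKey.generate()   # in practice, tied to a (pseudonym) certificate
beacon = json.dumps({"speed_kmh": 87, "heading_deg": 312, "lat": 48.117, "lon": -1.677}).encode()
signature = vehicle_key.sign(beacon)                  # cheap enough to repeat ten times per second

# A receiving vehicle verifies the signature with the sender's public key;
# verify() raises InvalidSignature if the beacon has been tampered with.
vehicle_key.public_key().verify(signature, beacon)
print("beacon authenticated, payload readable by all:", beacon.decode())
```

The design choice mirrors the text: verification is fast and keeps reaction times low, while the absence of encryption is deliberate, since every nearby vehicle must be able to read the warning.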

Of course, car manufacturers' primary concern is security; user-data confidentiality and privacy take second place.

The real question is whether user authentication, in terms of cooperative driving, has to be anonymous or not.

“Anonymous authentication exists and works”, says Gilles Guette, “but it requires calculation time and the process has to be repeated each time a connection is made. Car manufacturers, however, think that technical constraints will prevent this type of connection from being anonymous.”

It goes without saying that anonymity is not in the interest of many of those involved: insurance companies need to be able to designate the guilty party in the event of an accident, the police want to simplify roadside checks, car manufacturers are constantly refining their business models, etc.

Gilles Guette is working on the theory of these systems and is producing proofs of concept which are based on the definition of security, trust, anonymity and confidentiality within such networks. He carries out wireless and ad hoc network operational simulations (experiments, dimensioning).

“I started by working on the use of an embedded TPM security module in vehicles and the anonymisation of communications, then on an anonymous reputation system”, says Gilles Guette. “I then studied how to detect the creation of fake nodes in the mobile network: I created an attack scenario where a malicious user creates dozens of fictitious vehicles from his or her car to simulate a traffic jam and slow down cooperative-driving cars. Then I analysed anonymous proxy signatures, by which the authorities could grant a "power of signature" to the car so that it could authenticate itself in the network in a totally anonymous manner.” The systems include a measure to revoke anonymity if it is required in a court of law.

“Being able to correctly identify the guilty car after the event is no easy feat”, says the teacher-researcher. “If a driver manages to steal another driver's identity and causes an accident, the driver whose identity has been stolen is blamed. This has already been seen when music and films are illegally downloaded: internet users whose computer address has been hijacked have had to prove their good faith.”

It is clear that connected vehicle networks must be reliable. Gilles Guette's work focuses mainly on how this will operate once the majority of cars are equipped with cooperative driving features. For example, in response to the high cost of roadside infrastructure, many car manufacturers are choosing to insert mobile phone SIM cards in vehicles for data exchange. Gilles Guette concludes:

“This system is cheaper to roll out, but will it be able to cope with the expected increase in use and data traffic?”

Gilles Guette

Gilles Guette is a lecturer at the Université de Rennes 1 and a member of the joint Inria and IRISA CIDre team. He specialises in system and network security, particularly dedicated vehicle networks (VANETs). Gilles Guette teaches IT at ESIR (one of the Université de Rennes 1's two engineering schools) to engineering, degree and masters students. His students learn about security and wireless network dimensioning, especially for home automation: an area in which confidentiality and security are of prime importance.
Software safety in the cockpit

How can we ensure that software never has bugs? This is a crucial question when computers manage critical applications, such as the traffic on an automatic metro line or the electric flight controls of a plane with more than 500 passengers on board.

However, this type of software is composed of tens of thousands of lines of code and operates in a plane whose flight parameters change continually.

How can we ensure that the computer will always continue to control the flight commands, be it for routine operations or in an exceptional situation? This is very difficult to test or simulate except during a real flight.

“Extra precautions are taken when developing critical software: much more than when developing simple smartphone games, for example”, outlines Sandrine Blazy.

Even if zero risk is impossible to achieve, everything is done to guarantee the safest software possible at various levels. Several software tools in particular are used to perfect critical code, and it is important that reliable software is used for this.

“To begin with, even before they start to write the programme, experts define the behaviour expected from the future programme. To avoid any ambiguity, these specifications are formulated using precise mathematical notation and are clear to all who use them. The programme is then written in adherence with closely monitored procedures using specialised tools”, explains Sandrine Blazy.

Compiler

Compilers are at the front line of software development tools. In most cases, a programme written by a human being cannot be directly used by a computer: it has to be "compiled" into machine language before it can be executed. However, compilers can themselves introduce bugs: the executable code they generate does not always exactly match the developers' source code instructions. This can lead to errors in the resulting programme and to potentially fatal consequences in the case of critical software.

Proof assistant

The researcher uses a third type of software, proof assistants, to develop a "zero error" compiler. One of them is called Coq. As its name may suggest, it was created in France, and its specifications and proofs are written in its own language, Gallina. It is based on mathematics and can be used to prove mathematical theorems or the correctness of a compiler's operations, provided that the compiler's code is written in a form the assistant can examine.
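To give a flavour of what a machine-checked proof looks like, here is a tiny example written for the Lean proof assistant, a tool in the same family as Coq (Coq itself uses the Gallina language mentioned above); the statement and names are chosen purely for illustration.

```lean
-- A tiny machine-checked statement: addition of natural numbers is commutative.
-- The proof assistant only accepts the theorem once a complete proof term
-- type-checks; there is nothing left to "test".
theorem add_is_commutative (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Proving a compiler correct works on the same principle, only at a vastly larger scale: the theorem states that the generated code behaves like the source programme.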

“The strength of mathematical formalism is that it ensures the correctness of the compiler's operations a priori, that is to say without having to submit the latter to a series of in-depth tests”, explains Sandrine Blazy.

“This is how CompCert, Inria's "Coq-proofed" compiler, to which I contributed, is today sold to manufacturers who need to develop critical software.”

Other tools

The methods which Sandrine Blazy and her colleagues have developed and use also apply to securing the code of a static analyser, a tool which aims to guarantee the absence of certain bugs in compiler-generated code without even running it.

Sandrine Blazy concludes, “We are working on verifying a third tool, which is used to estimate the worst-case execution time of critical software.”

Obfuscation

Such tools can also serve to check that a programme which has been deliberately rewritten in an obscure manner, so that it cannot easily be reverse-engineered (analysed to reveal how its source code works), still carries out strictly the same tasks with the same level of reliability. This is known as secure obfuscation.

Perspectives

Sandrine Blazy's current research is centred on software security and will soon be applied to checking certain security parameters.

Links:

Sandrine Blazy

Sandrine Blazy is an IT professor at the Université de Rennes 1. She is head of the Master 2 IT Research programme and teaches the use of proof assistants, functional programming, formal methods and the study of software vulnerabilities. She coordinates the "Languages, types, proofs" working group within the framework of the national "Programming and software engineering" research group. Sandrine Blazy is a researcher and IRISA CELTIQUE team member; she studies the formal verification of compilers and static analysers with the help of the Coq proof assistant. In particular, this entails the mathematical formalisation of the semantics of the programming languages on which this software operates.
1000 scenarios for one burglary: please draw me… an attack tree!

You are responsible for the security of an important building. The employees in the offices of this building are mostly secret defence officials. The documents they produce and the computers on which they work must be totally safe from theft and damage. However, this building is enormous. How are you going to stop a vandal from breaking in and stealing a document or sensitive data?

Are you planning to install an entire alarm system with armoured doors and windows, badge readers and security cameras? That's fine, but where exactly are you going to install your sensors? How can you be sure that your protection is foolproof? Is there an entrance to the building that you may not know about? Will your staff be given the most effective security instructions?

This is where attack trees come into play. In our example, attack trees will represent, in chart form, possible break-in sequence logic so as to flag up the building's weaknesses. The aim is to create counter-measures which will compensate for the building's intrinsic security loopholes.

Real attack trees are of course much more complex, as the building in question offers many break-in possibilities. The real cases handled by the DGA are so complex that IRISA researchers have been asked to help generate the trees by computer.

The tools are, today, applied to real buildings and will be used to map IT network vulnerabilities.

How to draw a tree... an attack tree

The scientists create attack trees by taking into account the building's specifications as provided by the experts. They also define the main objective: in the following example, the aim is to steal a document from the director's office.

  1. The scientists use the specifications and objectives to generate all the possible pathways within the building.
  2. They then isolate the "winning" pathways which lead to the desired object.
  3. They create an abstraction of these pathways.
  4. They merge the simple "attack trees" generated by following the different pathways (a minimal code sketch of such a tree is given below).
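Here is that minimal sketch: an attack tree with OR nodes (any branch suffices for the attacker) and AND nodes (every branch is needed), whose leaves are marked feasible or not once a counter-measure is in place. The scenario follows the document's example of stealing a document from the director's office; the node names and feasibility flags are illustrative assumptions, not the DGA's real trees.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"            # "leaf", "and" or "or"
    feasible: bool = False        # only meaningful for leaves
    children: list["Node"] = field(default_factory=list)

    def possible(self) -> bool:
        if self.kind == "leaf":
            return self.feasible
        results = (child.possible() for child in self.children)
        return all(results) if self.kind == "and" else any(results)

steal_document = Node("steal a document from the director's office", "and", children=[
    Node("enter the building", "or", children=[
        Node("tailgate an employee", feasible=True),
        Node("force a service door", feasible=False),   # armoured after the audit
    ]),
    Node("open the director's office", "or", children=[
        Node("pick the lock", feasible=False),
        Node("steal a master badge", feasible=True),
    ]),
])

print("attack still possible:", steal_document.possible())
```

Flipping a leaf to infeasible simulates adding a counter-measure, which is exactly how such trees flag the building's remaining weaknesses.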

Are you ready? Let's go!

Pervasive computing: integrating real and virtual elements

Have you heard of pervasive computing? This is an environment where elements of the real and virtual worlds mix to such an extent that the lines which separate them appear invisible.

Today, if you want to link a real-life element to its virtual counterpart, all you need is a sensor or an e-label, both of which are very cheap. The labelled place or object is then incorporated into a wider IoT system for use in highly varied applications.

Take the example of a forest in which some equally spaced apart trees are fitted with a low-energy, mostly inactive, temperature sensor. In the event of a fire in the area, one of the sensors would send a geolocalised alarm. The alarm would be picked up by a connected receiver station and would trigger a fire station response (or perhaps drones) at the precise area where the fire had started. The emergency teams would therefore be able to nip a forest fire in the bud.

Pervasive computing usage is currently rising: RFID tags are found in personal objects such as passports, library documents and items for sale in shops, etc.

Frédéric Weis, a lecturer at the Université de Rennes 1, manages the joint Inria and IRISA TACOMA team, which specialises in pervasive computing.

“Our main aim is to build systems which guarantee user privacy”, explains Frédéric Weis.

"We are currently witnessing the advent of systems whereby millions of connected objects (smartphones, fitness sensors and automated devices such as "intelligent" thermostats) send their data to a centralised server bank: the infamous cloud."

“The manufacturer, therefore, has exclusive control over operations. The user can no longer fully control the collected data. We are working on systems that bring information processing back closer to users, so that they regain control over the data they generate”, says Frédéric Weis.

Connected objects have limited computing power and battery life. The idea is that they should generate more compact, more synthesised data, less frequently, limited to what is strictly necessary for the desired feature. Thus, for the user, network load, energy footprint and impact on privacy are all reduced.
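As a rough illustration of this data-minimisation idea, here is a minimal sketch in which a device keeps its fine-grained readings locally and only sends a compact hourly summary; the field names and readings are illustrative assumptions, not TACOMA's design.

```python
from statistics import mean

def summarise_hour(raw_watt_readings: list[float]) -> dict:
    """Reduce one hour of per-minute readings to the few figures the service needs."""
    return {
        "mean_w": round(mean(raw_watt_readings), 1),
        "peak_w": round(max(raw_watt_readings), 1),
        "samples": len(raw_watt_readings),
    }

minute_readings = [230.0, 232.5, 1810.0, 245.0] + [228.0] * 56   # the kettle spike stays local
print(summarise_hour(minute_readings))   # only this summary leaves the device
```

The fine-grained series that reveals toasters, kettles and washing machines never leaves the home; the service still receives enough to provide usage feedback.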

The IETR RFID test platform

On an operational level, pervasive computing networks require vast technological improvements if they are to meet expected functionalities.

One way of connecting objects to pervasive computing networks is to place an RFID microchip on them. Passive RFID microchips do not contain batteries: they receive power remotely when they are placed near a scanner. The scanner emits waves of a certain frequency which activate the RFID circuit, and the chip then replies by emitting an identification code which the scanner captures and recognises.

Passive RFID microchips are different to bar codes as they do not need to be directly visible to the scanner and can be inserted in the object.

Why, then, is it not possible to simply push a shopping trolley in front of an RFID scanner to obtain the total cost of our shopping in the blink of an eye?

“This is because RFID microchips are not fully reliable: RFID is still a little over-hyped”, says Paul Couderc with a touch of humour. This is visible when we have to scan our "contactless" card several times in the metro or bus to get it to work.

“The system needs to be improved, as there are RFID interferences and scanning errors”, says Paul Couderc. “In order to do so, we have created a partnership with the IETR and its specialists in antenna systems and radio-frequency communications. The IETR is 100 metres from IRISA on the Beaulieu campus. Together, we are creating a platform which enables RFID communication errors to be characterised. We do this by studying the device's antenna and looking for ways to improve scanning.”

Take an example similar to that of the supermarket trolley: a bin bag full of rubbish in which each item carries a radio tag. The aim is for the contents of the bag, once it has been thrown into an "intelligent" bin, to be identified so that the recycling chain can be improved.

The problem is that in the case of a rubbish bag, many RFID tags are crushed together in a small space or they all point in different directions or are close to disruptive elements (for example, aluminium and other metals).

Many scanning errors result from these conditions. The TACOMA team, in particular Frédéric Weis and Paul Couderc, has joined forces with the IETR to create a test platform. It is made up of a motorised device which moves a group of RFID tags and the scanning antenna relative to each other, so as to reproduce problematic situations and try to find solutions.
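To give an idea of why repeated reads matter, here is a rough simulation of the problem described above: each tag in the bag is detected with some probability per inventory round (much lower when it is crushed against metal), and we count how many rounds are needed before every tag has been seen. The probabilities are illustrative assumptions, not IETR measurements.

```python
import random

def rounds_to_read_all(read_probabilities: list[float], max_rounds: int = 200) -> int:
    """Count inventory rounds until every tag has been detected at least once."""
    unseen = set(range(len(read_probabilities)))
    for round_number in range(1, max_rounds + 1):
        unseen = {i for i in unseen if random.random() > read_probabilities[i]}
        if not unseen:
            return round_number
    return max_rounds   # some tags were never read

# 30 tags: most read easily, a few are crushed against aluminium packaging.
probabilities = [0.9] * 25 + [0.05] * 5
trials = [rounds_to_read_all(probabilities) for _ in range(1000)]
print("average inventory rounds needed:", sum(trials) / len(trials))
```

A few badly placed tags dominate the result, which is why the platform studies antenna placement and tag orientation rather than simply scanning more often.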

Frédéric Weis

Frédéric Weis is a lecturer at the Université de Rennes 1 and teaches internet technology and local wireless networks, network security and administration at Saint Malo IUT where he is head of the "Wireless Networks and Security" degree programme. His role as research member of the joint Inria and IRISA TACOMA team leads him to study wireless networks, mobility, pervasive computing, automation and sensor networks. The professional Network Security degree programme trains managers with experience in three complementary sectors: business and system network deployment, joint network and system security and very high speed operator access technology skills.

Paul Couderc

Paul Couderc, researcher at Inria…
Université de Rennes 1 - a key player in cybersecurity training

The Université de Rennes 1 stands out for its renowned expertise in cybersecurity on a European level.

The available University courses are taught by professors who are also leading researchers in the sector. Furthermore, the Université de Rennes 1 teaches the entire range of cybersecurity subjects: mathematics, law, IT, electronics and telecoms. On a wider scale, the vast array of partnerships with sector leaders (institutions, leading business schools, businesses, the DGA) within the Breton Pôle d’excellence cyber makes Rennes Métropole the ideal starting point for students, PhD students and researchers who are interested in cybersecurity.

The Université de Rennes 1 teaching-researchers' excellence is reflected in their involvement in ANSSI's Cyberedu programme - a national programme which aims to implement a basic foundation for security teaching in all French IT qualifications.

Training

At the Université de Rennes 1, cybersecurity is taught from the first year of general IT degrees and is studied more thoroughly in professional degrees and masters (IT, mathematics). Several security options may be elected:

These qualifications are supported by many business partners, including ANSSI, the DGA, Sopra Steria, Orange Business Services and Retis communication. Most partners offer security internships and their staff come to the Université de Rennes 1 to give presentations to the students. The qualifications are also backed by academic partnerships (CentraleSupélec, INSA, etc.) and research institutions (Inria, CNRS).

CyberSchool, the French Graduate School in Cybersecurity

  • Based on a fully interdisciplinary programme, the CyberSchool training offering allows students to build an "à la carte" study programme on the challenges of digital security.

Research and research-based training

As part of the Pôle d’excellence cyber, the Université de Rennes 1 has united, on various levels, more than 120 people to work on cybersecurity: 23 teams work in four joint research units.

  • The IRISA (Institut de recherche en informatique et systèmes aléatoires - the Institute for Research in Computer Science and Random Systems);
  • The IODE (Institut de l’Ouest : droit et Europe - Western Institution of European Law);
  • The IETR (Institut d’électronique et télécommunications de Rennes – Rennes Institute of Electronics and Telecommunications);
  • The IRMAR (Institut de recherche mathématique de Rennes - Rennes Institute of Mathematical Research).

These laboratories work closely with the Pôle d’excellence cyber's partner businesses and some co-fund theses via CIFRE. The DGA directly funds a dozen PhD theses per year across the board.

Jean-Marc Jézéquel, who teaches at the Université de Rennes 1 and directs IRISA, is also in charge of coordinating research within the Pôle d’excellence cyber.

It is also of note that IRISA created a research team dedicated solely to embedded security systems (EMSEC) in 2016. This team is already highly renowned: it has two IUF members and is in charge of an ERC-funded project.

Links: IRISA, IODE, IETR, IRMAR, CyberEdu, PEC

IT system security management at the Université de Rennes 1

As a conclusion to this cybersecurity overview, it seemed pertinent to present the real-life challenges which the Université de Rennes 1 teams came up against when securing the University's information system (IS).

Interview with Serge Aumont, Université de Rennes 1 Information Systems Security Manager (ISSM) since 2011.

The University's information system covers all types of information, digital and otherwise, and is mainly involved in:

  • timetable, exams and grades management
  • the University's finances and accounts
  • staff and salary management
  • users' digital services (email, VWE, storage, data back up and history, electronic documentation, etc.)
  • research-generated data
  • teaching supports (MOOC)

These are sensitive topics, at various levels, for the Université de Rennes 1 user community (almost 33,000 people; 29,000 students and 3,900 staff). We understand that these systems must be secure if the Université de Rennes 1 is to operate efficiently. They are very substantially computerised meaning that the entire security system is an integral part of the institution's digital infrastructure:

  • 5,000 workstations
  • 80 terabytes of daily network traffic on the 450 IT-managed servers
  • 160 internet domains
  • almost a thousand web servers
  • 358 terabytes of internet storage space is saved and available for UR1 users
  • inside and outside connections through RENATER, a high-speed academic network

A constantly threatened system

The vast extent of the Université de Rennes 1 IS means that it is vulnerable to attacks.

Not all the attacks take place via internet. During the Université de Rennes 1 recent governance restructure, a telephone scam attempt targeting the Presidency accounts department was, thankfully, foiled by vigilant administrative staff. The attacker knew that the Université de Rennes 1 Presidency team and departments were being restructured.

On a purely digital IS level, attack types evolve over time and also in reaction to improved protection measures. In 2011, website hacking was widespread but this is less the case now.

That said, internet-exposed machines are permanently under attack. Two thirds of incoming emails are spam (1.2 million messages out of a total of 1.8 million messages per month).

It is therefore understandable that the CIO's teams are cautious on many levels: software and equipment obsolescence, phishing e-mails which are highly effective and therefore destructive, spyware programmes which siphon off confidential data such as user passwords, Trojan horses which hackers use to take control of a compromised computer, and even denial-of-service attacks on internet-facing bridgehead equipment which could disconnect the entire Université de Rennes 1 internet system.

Serge Aumont's role as Université de Rennes 1 ISSM

Serge Aumont, the university’s ISSM (Information system security manager) and his colleagues are responsible for managing security incidents: impact analysis, adjustments, repairs and devising prevention methods.

Serge Aumont categorically rejects the "super-geek" clichés about his job, and refuses to be photographed in the highly impressive computer server room, where such an ultra-technological setting is too often used as a wake-up call for a variety of audiences. He describes his work as an exercise in shared dialogue and formalism, and even, he points out with a touch of humour, a good deal of red tape.

ISSP: definition

The ISSM works closely with the Director of Information Systems (DIS) and the president of the Université de Rennes 1. Serge Aumont is an expert in IS security governance principles (availability, integrity, confidentiality, traceability) and adapts them to the Université de Rennes 1. As ISSM, Serge Aumont's job is to suggest IS risk management measures to the institution's governing bodies. The latter can then make informed decisions according to the means available and the impact of such measures on the university community. Each decision carries residual risks, and these are taken into account.

The following are the typical steps of an Information Systems Security Policy (ISSP):

  1. definition of security aims: availability, integrity, confidentiality and proof
  2. analysis of risk and potentially dangerous events
  3. formalisation of risk reduction measures
  4. political arbitration
  5. application
  6. assessment
  7. lessons learned are taken into account, and a new cycle begins at step 1

His methods

Serge Aumont has to keep an open dialogue with the Université de Rennes 1 community if his work is to be effective. His tasks require close collaboration with his IT colleagues (user support and system and network infrastructure managers, etc.) and the other information system players.

Besides daily and collaborative security incident management actions, Serge Aumont's main activity consists in drawing up and implementing procedures.

His actions

For example, it is Serge Aumont himself who helps to define the necessary knowledge base for the Université de Rennes 1 IS department on a "need-to-know" basis for those involved. This expression comes from the security sector and means that people who are authorised to know an operational secret only need to know the part of the secret which applies to their function. The aim is to limit the information's "surface area".

Thus, when Serge Aumont showed us the IT computer room in Beaulieu, he brought us on the visitor-authorised "public circuit". We were able to take photos but we did not see anything which was confidential.

In reality?

Serge Aumont supervises infrastructure security measures and ensures their coherence. Here are several examples:

  • Cloned computer rooms with restricted access
  • Flow cryptography (RENATER certificate management)
  • Data backup and history for each user
  • RENATER anti-spam filters, antivirus roll out across the entire network
  • Random social engineering tests (phishing) to alert users to this type of attack, which is frighteningly effective when carried out with precision
  • Annual shutdown and reboot test for the Université de Rennes 1 entire infrastructure and digital services

Serge Aumont is also involved in renewing the identity management service. The latter authenticates users and checks their rights when they connect to the information system's services. This is no easy task, because the system has to take departures, arrivals, job transfers and re-registrations into account, in conjunction with the software used by Human Resources and the admissions department.

Perception

Users may feel that IS security policy measures are restrictive but it is important to understand what the measures prevent.

It is for this reason that the institution runs a yearly complete server shutdown over the course of a weekend. All the digital services are stopped in a controlled way: telephones, email, intranet and internet access, print servers and management software, etc. For a university of more than 31,000 members (students and staff), the impact is very high. However, shutdown and reboot tests are vital for highly complex systems: weeks of disturbed services would be avoided in the event of a major infrastructure incident.

What about research?

One of Serge Aumont's aims is to develop IS research security at the Université de Rennes 1: for the moment, most units use their own means or those supplied by co-supervisors (CNRS, Inserm). This is a problem in terms of the "Politique de protection du potentiel scientifique et technique de la nation [national scientific and technical potential protection policy]" regulatory requirements. This is coordinated by the campus security and defence official.

Links:

Serge Aumont

Serge Aumont has been the Université de Rennes 1 ISSM since 2011 and holds a DESS (professional Master 2) with a specialisation in IT Systems. He took part in creating RENATER (he was a member of the university network committee before it became RENATER) and has led the national ISSM network. Serge Aumont has published an article on X.509 cryptography certificates and took part in developing SYMPA, a mailing list management tool.
The beginnings of Cybersecurity

Firstly, concentrate and think about your:

computers, smartphones, tablets, televisions,

watches, fitness sensors, connected cars,

travel passes and bank cards, etc.

Next, imagine that they have zero connection: as if they have been completely cut off from internet and all database networks.

At the very best, their use would be severely limited. But, for most of them, the only practical option would be to hang them on the wall in memory of times past because cyberspace would be no longer.

Because we use them every day, we forget the vital link that these daily objects share with the virtual digital world: they have become passages to cyberspace, similar to when Alice in Wonderland disappears through the mirror.

Real-life possibilities are exaggerated in this new Wonderland: distances no longer exist, long-distance travel time takes a fraction of a second and our data takes a world tour in the blink of an eye. In cyberspace, we can communicate on a one-to-one basis or with a huge crowd. We can seduce from our sofas, find information, work, take part in never-ending, virtual battles and the most passionate of debates. The world's entire knowledge base is at our fingertips and commerce is booming. Those who have lived most of their lives in the pre-internet era are regularly amazed whilst younger generations take innovation in their stride.

As a result, the virtual world and real-life have become one. Certain daily events could not exist without interaction between a physical device and cyberspace, be it near or far. Sometimes the moment of connection from real-life to the digital world is highly apparent (social network logins for example) and sometimes, it is completely unnoticeable: going through metro and airport turnstiles, contactless payment at the bakery and opening a car with hands-free key fobs. These actions are affecting our lives more than we realise: just consider the number of children born as a result of real-life/virtual world encounters on social networks.

When cyberspace was first introduced outside the labs in which it was created, it was initially perceived as a childish dream of superpowers. The internet's first big breakthrough to the general public came in 1997, the year in which NASA broadcast almost real-time images from the Pathfinder robot on Mars. All internet users were suddenly virtually teleported to the surface of the red planet. And the smart cards we use for electronic payment are already thirty years old.

An extraordinary building game has developed on an international level to create the digital world. Billions of building blocks (computers, connected objects and data centres) have been progressively linked by gigantic connections, both wired and radio, to create the internet infrastructure. Technological exploits happen on a daily basis to "route" packets of data between users as quickly as possible, wherever they may be: submarine cables are laid, satellites launched, and data centres the size of villages are built with hundreds of thousands of server-computers.

The very furthest tips of this web-like structure can actually be touched: they are the sockets that internet providers install in our homes (ADSL, cable, optical fibre). We also see it when we try to "get a signal" in areas with low network availability by holding our mobiles up to find the nearest relay antenna. However, we can only see the smallest part of cyberspace, much smaller than the proverbial "tip of the iceberg". The digital world is today as large as planet Earth itself: sometimes embracing and sometimes magnifying the differences among its human population.

There is, however, another less-known aspect without which internet would not exist today: cybersecurity.

The child's dream of yesterday’s internet was closely followed by prosaic realities and the development of cybercrime: the more the network grew in terms of electronic data from all levels, the more this mass of information became valuable to online hackers. To begin with, the scientists' internet data was unencrypted, meaning that anyone connected to the same network had access to, and could read, the data. Today, it is out of the question to connect to online accounts without a user name, password and even two-factor authentication.

Individual data encryption became necessary because of three main uses: commerce and electronic finance, social networks (including online games) and dematerialised administration. Emails are a notable exception, as they are still mostly read and written in unencrypted form; this non-secure format will undoubtedly die out.

The links between our real lives and our digital activities have become so vital that an attack on their digital data would result in major problems in the real world. The three most frequent risks today for any given person (all three are often grouped together in one attack) are bank fraud, identity theft and digital harassment or blackmailing.

On a larger scale, these threats are a risk for:

  • industrial sites and infrastructures which need to protect themselves from an external takeover (nuclear power plants, electricity networks, public transport);
  • businesses (whose databases and stocks are potential targets);
  • countries (cyberspying is now widely practised);

Over the past few years, cybersecurity has therefore become a major stake for politicians, the military, economists, journalists and non-governmental associations, etc.

Today, cybersecurity is a State affair. In a geopolitically complex time marked by the ever-present risk of terrorist attacks, tensions have arisen between the general public, wishing to protect its right to privacy on the one hand, and governments wishing to develop their cyberspace surveillance. The major economic operators (Google, Apple, Facebook, etc.) are positioned in the middle and are making a profit from using their customers' personal data. These key players may only gain genuine customer loyalty by providing the latter with privacy, whilst adhering to the laws of the countries in which they operate.

Links:

Alice, Bob, authentication, security: a short cyber vocabulary

Here is a list of frequently used cybersecurity terms.

Identification: to communicate an identity to a service (for example by using a user name and password, personal key, etc.).

Authentication: declared user identity validation.

Integrity: guarantees that the data being read has been written by their genuine author and that it has not been tampered with.

Confidentiality: ensures that only authorised people have access to read the exchanged resources.

Availability: guarantees that the system provides its legitimate users with the expected services within an adequate response time.

Non-repudiation: guarantees that a transaction cannot be denied.

Cryptology is the science of secrets and uses the following terms:

  • Cryptography: the study of techniques for secure communication
  • Cryptanalysis: the analysis of secret communications with the aim of revealing said communications
  • Encrypt: transforms a plain message, using a key, to make it secret
  • Decipher: recovers the original message from its encrypted form using the key
  • Decrypt: uses cryptanalysis to transform an encrypted message back into a readable one by reconstructing the decryption key, which was unknown at the outset (see the toy example below)
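To make the decipher/decrypt distinction concrete, here is a toy Caesar cipher example (an illustration only, nothing like real cryptography): deciphering uses the known key, while decrypting reconstructs it by brute force.

```python
# Toy Caesar cipher over lowercase letters, to illustrate the vocabulary above.
def encrypt(text: str, key: int) -> str:
    return "".join(chr((ord(c) - 97 + key) % 26 + 97) for c in text)

def decipher(ciphertext: str, key: int) -> str:
    # "Decipher": the secret key is known, so we simply invert the encryption.
    return encrypt(ciphertext, -key)

def decrypt(ciphertext: str, known_word: str) -> str:
    # "Decrypt": the key is unknown; cryptanalysis (here, brute force) rebuilds it.
    for key in range(26):
        candidate = encrypt(ciphertext, -key)
        if known_word in candidate:
            return candidate
    return ""

secret = encrypt("meetalicetonight", key=3)
print(decipher(secret, 3))          # the legitimate recipient, key in hand
print(decrypt(secret, "alice"))     # Eve or Mallory, no key, trying every shift
```

Real ciphers are designed precisely so that this second path, decryption without the key, is computationally out of reach.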

Cryptologists often use characters, who play a precise role, to explain how their secured exchange systems work. Here are the four main characters:

  • Alice: issues secret messages to Bob.
  • Bob (Bernard in French): receives and answers Alice's messages.
  • Ève: an indiscreet person wishing to spy on Alice and Bob's conversations.
  • Mallory: is indiscreet like Eve and spies on Alice and Bob, but Mallory is particularly dangerous as he wants to use the exchanged information (intercept Alice's public key and replace it with his own, for example).

ANSSI: National Cybersecurity Agency of France. The ANSSI is the national authority for IT system defence and security. It is responsible for rolling out a wide panel of legislative and practical rules and verifying the implementation of adopted measures. It monitors, detects, alerts and reacts to computer attacks, especially on State networks.

CNIL: National Commission on Informatics and Liberty. The CNIL helps professional bodies to comply with legislation and helps individuals to manage their private data and exercise their rights. It analyses the impact of technological innovations and new usages on privacy and civil liberties. The CNIL also works closely with its European and international counterparts with the aim of creating aligned regulations.