Main reasons why each app isn’t recommended

Google Messages

  • Google were implicated as an NSA partner in the Snowden leaks.
  • Google’s business model relies on collecting user information, and hence there is little incentive for Google to truly secure users’ information.
  • Google’s general record in digital privacy is poor, considering that their business model relies on collecting user information.
  • Metadata is not encrypted, and hence timestamps, location, sender, receiver, etc. are all exposed. Metadata gives intelligence agencies a lot of data from which to deduce what people are doing: where they are, whom they’re messaging, when they’re messaging, etc.
  • The app and backend servers are not open source, and hence no one knows the quality of the code, whether encryption is implemented correctly, or whether there are serious vulnerabilities.
  • Data is not encrypted at rest on iOS and Android, and hence user data is not protected if the device is compromised.
  • There is very little documentation, and hence no one knows the encryption implementation details, infrastructure security, or even the security of the app itself.
  • There has been no code audit or independent security analysis, and hence we must take Google’s word for it. No one can mark their own homework, including Google.

Facebook Messenger

  • Facebook were implicated as an NSA partner in the Snowden leaks.
  • Facebook’s business model relies on collecting user information, and hence there is little incentive for Facebook to truly secure users’ information.
  • Facebook’s general record in digital privacy is poor, considering that their business model relies on collecting user information.
  • Encryption is not turned on by default.
  • Metadata is not encrypted, and hence timestamps, location, sender, receiver, etc. are all exposed. Metadata gives intelligence agencies a lot of data from which to deduce what people are doing: where they are, whom they’re messaging, when they’re messaging, etc.
  • The app and backend servers are not open source, and hence no one knows the quality of the code, whether encryption is implemented correctly, or whether there are serious vulnerabilities.
  • There is very little documentation, and hence no one knows the encryption implementation details, infrastructure security, or even the security of the app itself.
  • There has been no code audit or independent security analysis, and hence we must take Facebook’s word for it. No one can mark their own homework, including Facebook.

Apple iMessage

  • Apple were implicated as an NSA partner in the Snowden leaks.
  • Apple’s general record in digital privacy is poor/mixed. For example, Apple are paid by Google to make Google the default search engine in Safari on iOS, and iCloud backups are not end-to-end encrypted.
  • Encryption uses weak cryptographic primitives; perfect forward secrecy is not implemented; and, if iCloud backup is enabled, Apple have full access to messages.
  • Metadata is not encrypted, and hence timestamps, location, sender, receiver, etc. are all exposed. Metadata gives intelligence agencies a lot of data from which to deduce what people are doing: where they are, whom they’re messaging, when they’re messaging, etc.
  • The app and backend servers are not open source, and hence no one knows the quality of the code, whether encryption is implemented correctly, or whether there are serious vulnerabilities.
  • There is very little documentation, and hence no one knows the encryption implementation details, infrastructure security, or even the security of the app itself.
  • There has been no code audit or independent security analysis, and hence we must take Apple’s word for it. No one can mark their own homework, including Apple.

Riot / Element

  • There has been no code audit or independent security analysis, and hence we must take Element’s word for it. No one can mark their own homework.
  • Matrix has had at least one embarrassing security breach, indicating that their infrastructure security is lacking.

Microsoft Skype

  • Microsoft were implicated as an NSA partner in the Snowden leaks.
  • Microsoft’s business model relies on collecting user information, and hence there is little incentive for Microsoft to truly secure users’ information.
  • Microsoft’s general record in digital privacy is poor, considering that their business model relies on collecting user information (e.g., Bing).
  • Encryption is not turned on by default.
  • Metadata is likely not encrypted, and hence timestamps, location, sender, receiver, etc. are likely exposed. Metadata gives intelligence agencies a lot of data from which to deduce what people are doing: where they are, whom they’re messaging, when they’re messaging, etc.
  • The app and backend servers are not open source, and hence no one knows the quality of the code, whether encryption is implemented correctly, or whether there are serious vulnerabilities.
  • There is very little documentation, and hence no one knows the encryption implementation details, infrastructure security, or even the security of the app itself.
  • There has been no code audit or independent security analysis, and hence we must take Microsoft’s word for it. No one can mark their own homework, including Microsoft.

Telegram

  • Bespoke cryptography is not a good idea, and Telegram’s encryption implementation has been criticised by cryptographers.
  • Encryption is not enabled by default.
  • User data (phone numbers, contact information) is not protected, and hence Telegram have access to mobile numbers and contacts’ names & mobile numbers.
  • Telegram’s legal jurisdiction is unclear. This is outside my area of expertise.

Despite what many articles say, Telegram should not be considered secure. I was unhappy that I had to install the app — and give my mobile phone number to Telegram — to review it.

Viber

  • The app and backend servers are not open source, and hence no one knows the quality of the code, if encryption is implemented correctly, and if there are serious vulnerabilities.
  • Viber was founded by Talmon Marco, ex-Chief Information Officer (CIO) of the Central Command of the Israel Defense Forces, and the funding is unclear. Anyone with such clear connections to the government / intelligence agencies cannot be trusted to create a secure messaging app.
  • User data (phone numbers, contact information) is not protected, and hence Viber have access to mobile numbers and contacts’ names & mobile numbers.
  • There has been no code audit or independent security analysis, and hence we must take Viber’s word for it. No one can mark their own homework.

Facebook Whatsapp

  • Facebook were implicated as an NSA partner in the Snowden leaks.
  • Facebook’s business model relies on collecting user information, and hence there is little incentive for Facebook to truly secure users’ information.
  • Facebook’s general record in digital privacy is poor, considering that their business model relies on collecting user information.
  • Metadata is not encrypted, and hence timestamps, location, sender, receiver, etc. are all exposed. Metadata gives intelligence agencies a lot of data from which to deduce what people are doing: where they are, whom they’re messaging, when they’re messaging, etc.
  • User data (phone numbers, contact information) is not protected, and hence Whatsapp have access to mobile numbers and contacts’ names & mobile numbers.
  • The app and backend servers are not open source, and hence no one knows the quality of the code, whether encryption is implemented correctly, or whether there are serious vulnerabilities.
  • There has been no code audit or independent security analysis, and hence we must take Facebook’s word for it. No one can mark their own homework, including Facebook.
  • Messages can be read by Facebook if a recipient reports them as “abusive”.

Wickr Me

  • The app and backend servers are not open source, and hence no one knows the quality of the code, whether encryption is implemented correctly, or whether there are serious vulnerabilities.
  • Recent independent audits are not published, and hence we don’t know the quality of the code, whether encryption is implemented correctly, or whether there are serious vulnerabilities.
  • Owned by Amazon.
  • Funded by the CIA before being bought by Amazon. Enough said.

Company jurisdiction

This matters because many countries have laws that demand that encrypted data be able to be decrypted by the government. Many other countries employ vast surveillance networks or have uncomfortably close relationships with companies when it comes to gaining access to customers’ data.

Red = Company is under the jurisdiction of a known Five Eyes partner. Or the company is under the jurisdiction of a country that is well known for [mass] surveillance.

Yellow = Company is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. Or the country is known to cooperate with Five Eyes countries.

Green = Company is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. No known ties to Five Eyes etc.

Infrastructure jurisdiction

See above. To operate a truly global service, companies may have infrastructure in different regions of the world in order to, for example, provide lower network latency.

Red = Infrastructure is under the jurisdiction of a known Five Eyes partner. Or the infrastructure is under the jurisdiction of a country that is well known for surveillance.

Yellow = Infrastructure is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. Or the country is known to cooperate with Five Eyes countries.

Green = Infrastructure is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. No known ties to Five Eyes etc.

Implicated in giving customers’ data to intelligence agencies

This matters because companies can be forced by law to give customers’ data to intelligence agencies. Other known methods by which these agencies can get customers’ data include coercion, hacking, planting an employee, or simply asking nicely. I’ve used the term “intelligence agencies” to refer to any government agency.

Note: I have considered “customers’ data” here to mean customers’ content/messages (data, not metadata). Wickr, for example, do co-operate with law enforcement agencies — as all companies must — but they can only hand over metadata because the content is encrypted (and they don’t have the keys).

Red = The company has been implicated in giving customers’ data to intelligence agencies. This is proven by evidence.

Yellow = The company has been implicated in giving customers’ data to intelligence agencies. There is no direct evidence, but the source is reputable.

Green = The company has not been implicated in giving customers’ data to intelligence agencies.

Surveillance capability built into the app?

This matters because some jurisdictions mandate that certain systems must have surveillance access for governments.

Note: While many American companies were implicated in the Snowden leaks — the PRISM programme specifically — I’ve considered this for the app only. If they are part of PRISM, this is considered under “Implicated in giving customers’ data to intelligence agencies”.

However, in saying that, I assume that Facebook, Google, Apple, and Microsoft have all granted government backdoors to their apps for intelligence agencies. But apart from Microsoft, there is no proof that I could find.

Red = Confirmed. The app was specifically designed to enable surveillance.

Yellow = It’s widely accepted that the app was designed to enable surveillance based on evidence from a reputable source.

Green = No… not that we know…

Does the company provide a transparency report?

Many companies periodically publish a transparency report. This details what type of requests have been received from governments, how many requests were made, how many customers were affected, etc.

Red = Company does not provide a periodic transparency report. (Or it’s not particularly useful.)

Green = Company provides a meaningful transparency report periodically.

Company’s general stance on customers’ privacy

This matters because companies often talk a big game when it comes to customers’ privacy. How often have you heard this after a data breach? “We care deeply about our customers’ security/privacy and have industry-leading security in place”.

Red = Company does not design its systems to collect minimal customer information; or does not have strong encryption/security controls; or does not have a simple, readable privacy policy and terms & conditions. Or the company is known to cooperate with legal (or informal) requests for customer information. Or the company’s business model relies upon users’ data.

Yellow = I’m not sure that there is a middle ground. I’ll write this if I ever think that it’s appropriate for an app company.

Green = Company designs its systems to collect minimal customer information; has strong encryption/security controls; and has a simple, readable privacy policy and terms & conditions. The company is unable to hand over user data to governments even if asked. Likewise, the company is known to fight legal challenges to decrypt or otherwise hand over customer data. The company’s business model does not rely upon users’ data.

Funding

This matters because “money talks”, as the saying goes. If the company or person behind the money is likely to have reason not to protect customers’ privacy, it’s important to know. This could be indicative of the company not doing as they say (Google, Whatsapp, for example) or changing their mind once they’ve onboarded enough customers from whom they can make money.

Red = Funded by a company/person that/who is well-connected to, or well-known for, collecting customers’ data. Or they are known for collecting customers’ data or cooperating with the authorities when it comes to requesting customers’ data.

Yellow = I don’t know if there is a middle ground. If there is, I’ll write about it when it happens.

Green = Funded by companies/people that/who either have a vested interest in, or no obvious reason against, encrypting/securing customers’ data. They mustn’t be known for collecting customers’ data or cooperating with the authorities when it comes to requesting customers’ data.

Company collects customers’ data

This matters because many companies use customers’ data for advertising, for improving their services, or simply to sell to other companies. Do you truly believe such companies want to protect your messages if they normally make money from your personal data?

Red = Yes, they collect more than is required for the functioning of the secure messaging app. Indeed, they collect other customer data for other parts of their business.

Yellow = They collect only the minimal amount (cellphone number or email address, for example) of customer data to provide a secure messaging app.

Green = They collect no user data. (I’m assuming here that you can buy the app, if required, in an anonymous fashion.)

App collects customers’ data

This matters because many companies use customers’ data for advertising, for improving their services, or simply to sell to other companies. Do you truly believe such companies want to protect your messages if they normally make money from your personal data?

Red = Yes, they collect more than is required for the functioning of the secure messaging app. Indeed, they collect unprotected customer data, or even the messages themselves.

Yellow = They collect only the minimal amount (cellphone number or email address, for example) of customer data to provide a secure messaging app.

Green = They collect no user data.

This rating is based on the permissions listed in Apple’s App Store.

Is encryption turned on by default?

Self-explanatory.

Red = No.

Green = Yes.

Cryptographic primitives (key exchange, symmetric encryption, authentication and integrity)

Specific key exchange, encryption, and hashing algorithms are considered secure by cryptographers. It’s important that algorithms without known weaknesses are used. These are the building blocks upon which secure encryption is built.

Note that I have not considered whether the implementation of these building blocks is sound.
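
As an illustration of one such building block, here is a minimal Python sketch of message authentication and integrity using HMAC-SHA256, a primitive currently considered secure. The key and message are placeholders of my own, not taken from any particular app:

    import hashlib
    import hmac

    key = b"shared-secret-key"  # in practice, derived from the key exchange
    message = b"Hello, Bob"

    # The sender computes an authentication tag over the message.
    tag = hmac.new(key, message, hashlib.sha256).digest()

    # The receiver recomputes the tag and compares in constant time.
    valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
    print(valid)  # True: the message is authentic and unmodified

If even one byte of the message changes in transit, the tags no longer match.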

Red = App uses cryptographic primitives that have been broken. Practical attacks against them exist.

Yellow = App uses cryptographic primitives that are considered weak. However, there are no known practical attacks against them yet.

Green = App uses well-known, secure cryptographic primitives that provide post-quantum protection.

Are the app and server completely open source?

This matters because a fully open source app can be audited by the industry. Open source code leads to near full transparency: we can tell if a company’s claims meet reality. Likewise, we can find any vulnerabilities in the software, weaknesses in the implementation, or design deficiencies. The server code must also be open source; this is because all apps use a central directory service to match users. Vulnerabilities and backdoors could exist in these directory services.

Red = No.

Green = Yes.

Are reproducible builds used to verify apps against source code?

Are you sure that the app you downloaded from Google and/or Apple is using the exact source code that the developers published? Reproducible builds are a method by which installed apps can be compared to published source code, thereby ensuring that no malicious changes have been made to the apps.
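
Conceptually, the final verification step is a byte-for-byte comparison. A minimal Python sketch of that step (the file names are hypothetical, and real verification must first normalise things like signatures and timestamps):

    import hashlib

    def sha256_of(path):
        """Return the SHA-256 hex digest of a file."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Compare the app downloaded from the store against one built from source.
    downloaded = sha256_of("app-from-store.apk")
    rebuilt = sha256_of("app-built-from-source.apk")
    print("Build is reproducible:", downloaded == rebuilt)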

Can you sign up to the app “anonymously”?

This matters because many people have good reasons for needing to remain anonymous. Having to provide a unique ID of some kind — a cellphone number, email address, etc. — means giving away something that could be used to track you.

Red = No, users must provide some kind of contact details such as an email address or cellphone number. (I’m aware that you could get an anonymous email address or even cellphone numbers. However, I’m not considering workarounds. Even these could be traced.)

Yellow = You must provide an email address or a cellphone number. However, these are provably hashed, and hence they are unreadable by the company.

Green = Yes, you do not need to give away any details in order to use the app.  (I’ve accepted here that you must be uniquely identifiable by the directory server, and hence that some kind of random ID must be assigned to each user in order for the app to work.)

(Hashes are irreversible, one-way functions that can give each cellphone number or email address a unique value that is essentially gibberish. Your cellphone number or email address is hashed on your device, then uploaded to the directory server. In turn, everybody who uses the app has hashes of all of their contacts calculated on their device and then uploaded to the directory server. If two hashes match, then the directory server knows that your contact has the app installed without knowing your (or their) email address or cellphone number.)
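
A minimal Python sketch of the client-side half of this scheme (the numbers are made up; note also that phone numbers have so little entropy that a plain hash can be brute-forced, so real systems need additional protections):

    import hashlib

    def hash_contact(phone_number):
        """One-way hash of a phone number; the server only ever sees this value."""
        return hashlib.sha256(phone_number.encode()).hexdigest()

    # Your own number and your contacts are hashed on the device before upload.
    my_hash = hash_contact("+15551234567")
    contact_hashes = [hash_contact(n) for n in ["+15557654321", "+15550001111"]]
    print(my_hash)
    print(contact_hashes)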

Can you manually add contacts without needing to trust a directory server?

Some apps require that you register yourself with a cellphone number or email address. This data is stored on the company’s servers (hopefully only as a one-way hash). It matches phone numbers and/or email addresses in your contact list (assuming that you allow the app access to it) so that you know who else uses the same app.

However, how do you know that you have been “matched” with the correct person? That the company hasn’t matched you with someone else (e.g., an intelligence agent)? This is especially important the first time that you are “matched”.

Some apps allow you to manually add a contact without needing to trust that a third party correctly matches you. This happens by two people scanning each other’s QR code. Threema does this very well.

This also has the advantage that you don’t need to give your phone number or email address to the company. You can add people anonymously, thereby increasing your privacy.

Red = No.

Green = Yes.

Can you manually verify contacts’ fingerprints?

In order to ensure that you’re talking to whom you believe you are, it’s important that apps support the verification of users’ fingerprints. A fingerprint is a representation of your identity that’s bound to your encryption keys. If you cannot manually verify fingerprints within the app — by scanning a QR code, or by publishing your fingerprint, or by sending your fingerprint via another medium, or simply reading it over the phone — then your messages could be intercepted by what is called a “man in the middle (MITM)” attack.

Alice is sending messages to Bob. Well, she thinks she is messaging Bob; but actually, she is sending messages to Eve, who reads them, and then passes them on to Bob. Neither Alice nor Bob realise this is happening.

Verifying fingerprints ensures that this cannot occur.
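
As a sketch of what a fingerprint is, here is a toy Python example that derives a short, human-comparable fingerprint from a public key. The key bytes are hypothetical, and real apps each have their own fingerprint formats:

    import hashlib

    def fingerprint(public_key_bytes):
        """Derive a short, human-readable fingerprint from a public key."""
        digest = hashlib.sha256(public_key_bytes).hexdigest()[:32]
        # Group into blocks of four characters so it can be read aloud.
        return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    alices_view_of_bob = fingerprint(b"bob-public-key-bytes")
    bobs_own_fingerprint = fingerprint(b"bob-public-key-bytes")

    # If these differ, someone (Eve) may be sitting between Alice and Bob.
    print(alices_view_of_bob == bobs_own_fingerprint)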

Red = No.

Green = Yes.

Could the directory service be modified to enable a MITM attack (especially when first adding a contact)?

When using most messaging apps, you must/can provide a phone number, username, or email address. If a friend of yours has that phone number or email address in their contacts list, the app can automatically add your friend to your contacts in the app itself. (Or perhaps you must add their username manually.) That’s how these messaging apps know which of your friends are using them, too.

When first adding a contact, it’s possible that a directory service could “match” you with the incorrect person, either maliciously or by mistake. This could mean that while you believe that you are talking to a friend of yours, you are in fact talking to an intelligence agency. Manually verifying each other’s fingerprint wouldn’t raise any concerns since the intelligence agency would be using a valid fingerprint.

This is one way in which Threema has absolutely nailed mutual identification without a directory service. Each person can scan a QR code in the app — either by being physically in the same place, or by each person publishing their QR code somewhere on the Internet. Each person can then manually add the other without the need of the directory service.

Likewise, a directory service means that a third party’s device could end up being trusted. The same functionality that enables iMessage to send all of your messages to all of your authorised devices could be used to send all of your messages to an untrusted device without you knowing.

Note: This is arguably the biggest weakness in all messaging apps. Even if you’ve manually added a contact, you still must trust that the directory service isn’t doing anything malicious. This could include adding an unauthorised device to your account, giving another user access to your account (the same way in which you can use multiple devices), or temporarily matching you with another user. This is why being alerted when a user’s fingerprint changes is so important. Likewise, it’s important that the server side of the system is open source, too.

Red = Yes, directory services could be used to MITM a conversation.

There are no other options; all apps must trust a centralised directory service.

Do you get notified if a contact’s fingerprint changes?

A contact’s fingerprint changes when they reinstall the app or their phone without having backed up (where possible) their ID and encryption key. If the ID and encryption key were not backed up and restored, the app will generate a new ID and encryption key, which is represented by a new fingerprint.

However, a contact’s new fingerprint could also be a sign of a man in the middle attack. Hence you should re-verify your contacts if their fingerprint changes.
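
In code, this is a simple “trust on first use” check. A minimal Python sketch, where a dictionary stands in for the app’s persistent contact store:

    # Remember each contact's fingerprint and warn when it changes.
    known_fingerprints = {}

    def check_contact(contact_id, fp):
        previous = known_fingerprints.get(contact_id)
        if previous is None:
            known_fingerprints[contact_id] = fp  # first contact: remember it
            return "new contact"
        if previous != fp:
            return "WARNING: fingerprint changed - re-verify this contact"
        return "fingerprint unchanged"

    print(check_contact("bob", "abcd 1234"))  # new contact
    print(check_contact("bob", "abcd 1234"))  # fingerprint unchanged
    print(check_contact("bob", "ffff 9999"))  # a reinstall - or a MITM attack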

Red = No.

Yellow = Sometimes, under specific circumstances. (Wire does this if you’ve previously verified a contact’s fingerprint.)

Green = Yes.

Is any personal information (cellphone number, email address, contact list, etc.) hashed?

If data is hashed, it’s unreadable to companies. If, for example, a phone number is hashed, it’s given a unique, irreversible representation that is essentially gibberish. Each phone number will always have a unique representation (hash).

This method can be used to protect contact lists. Instead of uploading a list of your contacts, it’s more secure to upload a hash of each contact. If one of your contacts has the same app, your hashed phone number in your contacts will match their hashed phone number on the company’s servers.
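
The server-side half of the matching is then a simple set intersection over hashes. A minimal Python sketch (the hash values are placeholders):

    # The server stores only the hashes of registered users' numbers...
    registered_hashes = {"a1b2c3d4", "e5f6a7b8", "c9d0e1f2"}

    def match_contacts(uploaded_hashes):
        """Return which of the client's hashed contacts are registered users."""
        return registered_hashes & set(uploaded_hashes)

    # ...and intersects them with the hashes a client uploads.
    print(match_contacts(["e5f6a7b8", "00000000"]))  # {'e5f6a7b8'}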

There’s actually no need for companies to have any personal information for a secure messaging app. (Threema does this well.) However, apps such as Signal use phone numbers as a unique ID (and send you an SMS to activate the app).

Red = No personally identifiable information is hashed.

Yellow = A limited amount of personally identifiable information (cellphone numbers) is not hashed. All other information, including contacts, is hashed.

Green = All personally identifiable information is hashed.

Does the app generate & keep a private key on the device itself?

For end-to-end encryption to work, the private key must be generated and kept on the device itself. If a company has access to the private key, then the system is not secure.
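
A minimal sketch of on-device key generation, using the third-party Python cryptography package (my choice for illustration only; the apps reviewed here each use their own implementations):

    # pip install cryptography
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    private_key = X25519PrivateKey.generate()  # generated and kept on the device
    public_key = private_key.public_key()      # only this half is ever published

    # The directory server receives the public key only; holding the public
    # key alone is not enough to decrypt messages sent to this device.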

Red = No.

Green = Yes.

Can messages be read by the company?

This is pretty self-explanatory. For apps that can do both unencrypted and encrypted messages (Telegram, Google Allo, etc.), I’ve said “Yes”.

Red = Yes.

Yellow = Most likely. There is a significant amount of evidence that indicates that the company can actually read the messages.

Green = No.

Does the app enforce perfect forward secrecy? (At the message encryption level, not transport over networks)

Each message that’s sent should be protected by a unique encryption key (often called a session key). This way, if the encryption key on the device is compromised, past messages are not necessarily compromised (each having been encrypted with its own unique key).
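
One common way to achieve this is a key ratchet: each message key is derived from a chain key, and the chain key is then advanced through a one-way function. A simplified Python sketch (real protocols, such as the Signal protocol’s double ratchet, are considerably more involved):

    import hashlib

    def ratchet(chain_key):
        """Derive a one-off message key, then advance the chain one step."""
        message_key = hashlib.sha256(chain_key + b"message").digest()
        next_chain_key = hashlib.sha256(chain_key + b"chain").digest()
        return next_chain_key, message_key

    chain = b"initial-shared-secret"  # placeholder; agreed via key exchange
    for i in range(3):
        chain, message_key = ratchet(chain)
        print(f"message {i} key: {message_key.hex()[:16]}...")

    # Old chain keys are deleted after use. Because the hash cannot be run
    # backwards, compromising today's chain key reveals nothing about
    # yesterday's message keys.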

Red = No.

Green = Yes.

Does the app encrypt metadata?

Metadata can include the date and time you sent a message, your location, and to whom you sent a message. (Basically any information about the information that you’re sending.) This is important because this data can reveal an awful lot about you. It’s also targeted by law enforcement agencies.

Red = No.

Yellow = Most metadata is encrypted. However, some pieces of (largely unimportant) information are kept by the company.

Green = Yes.

Does the app use TLS/Noise to encrypt network traffic?

It’s important that all communication between the app and its servers is encrypted over the Internet. This is the same technology that banks, Google, etc. use.
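
For what it’s worth, opening a verified TLS connection takes only a few lines in most languages. A minimal Python sketch using the standard library (example.com stands in for an app’s server):

    import socket
    import ssl

    # Certificate and hostname are verified against the system trust store.
    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())  # e.g. 'TLSv1.3'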

Red = No.

Green = Yes.

Does the app use certificate pinning?

This ensures that TLS connections only happen between the app and the company’s servers. Specifically, the app only trusts TLS certificates that come from the company (the public keys of those specific certificates are “pinned” in the app).
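
A minimal Python sketch of the idea (the pinned value and host are placeholders; production apps usually pin a hash of the public key, the SPKI, rather than the whole certificate, so that certificates can be re-issued without breaking the app):

    import hashlib
    import socket
    import ssl

    HOST = "example.com"
    PINNED_SHA256 = "0" * 64  # placeholder: known-good hash of the server's DER certificate

    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            # Refuse the connection if the certificate isn't the expected one.
            if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
                raise ssl.SSLError("certificate does not match the pinned value")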

Red = No.

Green = Yes.

Does the app encrypt data on the device? (iOS and Android only assessed)

Encrypting devices (and keeping the data inaccessible while devices are locked) is important so that the data on them cannot be read without the correct passcode. On iOS, this can be achieved through Apple’s Data Protection API. On Android, it looks as if file-based encryption — the part that keeps data inaccessible while devices are locked — is only available on “Nougat”.

Note: I’ve looked for confirmation on iOS that the correct data protection class is being used for each app. The default for third party data is to encrypt it; however, this can be overridden.

Red = No.

Green = Yes.

Does the app allow local authentication when opening it?

Some of the apps provide a form of local authentication — either a password/code or a fingerprint. This provides an extra level of access control to the data that’s held in the app. Note that I’ve only considered functionality when you open the app, not when you access specific chats/settings within the app.

This is separate from authentication — single factor or MFA — on the user’s account.

Red = No.

Green = Yes.

Are messages encrypted when backed up to the cloud?

Some apps offer end-to-end encryption that does not extend to messages when they are backed up to the cloud. For example, Whatsapp messages are stored in readable form (not end-to-end encrypted) when iCloud is used to back up a device. Apple encrypt the backup data on iCloud but keep a copy of the encryption key (and hence can read your backups, including iMessages). Law enforcement has been known to go after backed-up data when it’s stored at a company.
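
The fix is to encrypt on the device before uploading, with a key the provider never sees. A minimal sketch using the third-party Python cryptography package’s Fernet recipe (my choice for illustration; the apps use their own schemes):

    # pip install cryptography
    from cryptography.fernet import Fernet

    backup_key = Fernet.generate_key()  # stays with the user, never uploaded
    f = Fernet(backup_key)

    ciphertext = f.encrypt(b"entire message history")  # this is what gets uploaded
    # The cloud provider stores only ciphertext; without backup_key it can
    # read nothing, and neither can anyone it hands the backup to.
    print(f.decrypt(ciphertext))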

Note: If a company (that’s you, Apple) have access to the encryption key, I’ve rated this as “No”.

Red = No.

Green = Yes.

Does the company log timestamps/IP addresses?

Some companies (Whatsapp, for example) retain date and timestamp information of messages.

Red = Yes, timestamps/IP addresses are logged.

Yellow = Some timestamp/IP address information is stored, although it is not stored for each message sent.

Green = No.

Has there been a recent code audit and independent security analysis?

It’s important that each app has been independently tested. Anyone can create a system that they themselves cannot break. This can also help us trust closed-source apps, such as Threema and Wickr.

Red = No.

Green = Yes.

Is the design well documented?

It’s important that the clients, APIs, servers, directory servers, and messaging algorithms are all designed correctly. Having design documents published enables experts to check that all of these have been designed correctly.

Note: Even amongst those apps that I’ve rated as “Somewhat”, there’s a big difference in the level of documentation. I might try to further define this in the future.

Red = No. Very little documentation is available.

Yellow = Somewhat. Some documentation is provided.

Green = Yes, documentation — for clients, APIs, servers, directory servers, and messaging algorithms — is provided, and it’s all in one place.

Does the app have self-destructing messages?

This means that messages will be automatically deleted after a certain period of time. Personally, I think that this adds little to privacy since it’s trivial to take screenshots of messages.

I do, however, see some use cases: 1) sending a contact a piece of information that you don’t want to be available forever (a pre-shared key/password, for example), and 2) ensuring that certain parts of conversations are automatically removed.
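
Mechanically, this is nothing more than a time-to-live check. A toy Python sketch (real apps delete the message from storage on both devices rather than merely hiding it):

    import time

    class ExpiringMessage:
        """A message that becomes unreadable after ttl_seconds."""

        def __init__(self, text, ttl_seconds):
            self.text = text
            self.expires_at = time.time() + ttl_seconds

        def read(self):
            if time.time() >= self.expires_at:
                return None  # a real app would delete the message here
            return self.text

    msg = ExpiringMessage("the pre-shared key is: hunter2", ttl_seconds=60)
    print(msg.read())  # readable within the first minute, None afterwards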

Red = No.

Green = Yes.