AI Company Left 2.5M Personal Medical Records Exposed

This piece was written in collaboration with the Editorial Team.

Exposure of Medical Data

In July 2020, Jeremiah Fowler, a security researcher with SecureThoughts, discovered 2.5 million records containing sensitive medical data and Personally Identifiable Information (PII). Artificial intelligence company Cense AI had left two folders exposed, making millions of medical records available to practically anyone.

The information included names, insurance and payment records, and medical diagnosis notes. While this was not a direct breach, it is impossible to know whether anyone accessed the information with malicious intent. The database was accessible from any browser, allowing anyone to edit, download, or delete the data.
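To illustrate what "accessible from any browser" means in practice, here is a minimal sketch of the kind of unauthenticated check a researcher might run. The endpoint is hypothetical; a throwaway local server stands in for a database HTTP interface that answers without asking for credentials.

```python
import http.server
import threading
import urllib.request

class OpenHandler(http.server.BaseHTTPRequestHandler):
    """Simulates a misconfigured data store: answers any GET, no auth required."""

    def do_GET(self):
        body = b'{"records": 2500000}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging so the demo output stays clean.
        pass

def is_publicly_readable(url: str) -> bool:
    """Return True if the endpoint serves data with no credentials at all."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

# Stand up the simulated open database on a random free local port.
server = http.server.HTTPServer(("127.0.0.1", 0), OpenHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
print("exposed:", is_publicly_readable(url))  # exposed: True
server.shutdown()
```

A properly configured store would reject this request (401/403) or be unreachable from the public internet at all.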

The data was validated, and the company was notified with a responsible disclosure notice. The records appeared to be individuals' insurance information from auto insurance providers, and some of the names matched individuals living in the New York area. While Cense AI has made no direct comment, the public folders have since been restricted, ensuring no further data is left exposed. The folders appear to have been temporary staging repositories holding data awaiting loading into the bot/management system. Ultimately, it is not known how long this information was available to the public.

This is not the first data leak out of an artificial intelligence company this year. Earlier, in February, news broke of a data breach at AI-powered facial recognition company Clearview. While the company itself gave few details of how the sensitive information was leaked, it did confirm that hackers gained access to its client list and to what those clients were searching for. The major controversy was that Clearview's clients are overwhelmingly law enforcement agencies handling highly sensitive information.

Healthcare Data is Often a Target

The information discovered included names, payment records, and medical notes – a clear violation of HIPAA. The Health Insurance Portability and Accountability Act, passed in 1996 and amended since, is intended to improve privacy protections for patients and health plan members; most importantly, it is meant to safeguard patients' data and privacy. Since its inception, regulators have enacted strict HIPAA regulations governing the privacy and security of healthcare records.

On the dark web, medical records can fetch prices as high as 45 times those of credit card data and are therefore highly sought after. Companies and organizations trusted with sensitive information must do a better job of securing such data. Susceptibility can stem from outdated or inefficient security systems that have not been updated to face modern cybersecurity threats. In this case, it was simple negligence: the information was stored in plain text.
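One common safeguard for staging files like the exposed folders is to pseudonymize direct identifiers before they ever land in intermediate storage. The sketch below is illustrative, not a description of Cense's system: a keyed HMAC replaces a patient's name with a stable token, with the secret key (hypothetical here) kept outside the staging area.

```python
import hashlib
import hmac

# Hypothetical key: in practice it would live in a secrets manager,
# never alongside the staged data itself.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Deterministic keyed hash: the same input and key always yield
    the same token, so records can still be joined downstream."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# A record as it might sit in a staging folder, with the name tokenized.
record = {"name": "Jane Doe", "diagnosis_note": "..."}
staged = {**record, "name": pseudonymize(record["name"])}
print(staged["name"][:12], "...")  # a hex token, not the plain-text name
```

Because the hash is keyed, an attacker who copies the staging folder cannot reverse the tokens by brute-forcing common names without also obtaining the key.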

Network server breaches reportedly accounted for 30.6 million individuals' exposed records in 2019. This discovery alone equates to roughly 8.2% of that 2019 total. Jeremiah is “in no way implying that there was any violation or that Cense has violated any legal data breach notification requirement,” but companies must comply with these regulations or risk facing steep fines.

This is not the first, nor will it be the last, leak of healthcare data, but these facts will hopefully raise awareness of the issue. In many states, those affected have a legal right to know when their information has been involved in a security breach or exposure. Currently, it is unclear whether any authorities or impacted individuals have been notified of the Cense data exposure.

Can We Trust Artificial Intelligence to Handle Information?

Many forward thinkers in technology see artificial intelligence as the “fourth industrial revolution,” one that will bring wide-scale automation to nearly every facet of our lives. There is little doubt that artificial intelligence will be an important part of our lives, but that does not answer the ethical and moral dilemmas that will arise alongside it.

The potential for abuse in the field of AI is great. As security researchers, our goal is to ethically help secure exposed data and responsibly notify the owners of the data before it is compromised, stolen, or wiped out by ransomware.

A number of non-profit groups, technology leaders, and companies have pooled their resources to build a kind of consensus on best practices for artificial intelligence. One of the hot topics is AI's handling of sensitive information. The partners produce their own research and publish their own solutions to these looming questions. One of the largest of these groups, The Partnership on AI, boasts partners such as Apple, Google, and Facebook. This is a positive step toward consensus-based regulation of this critically important industry.

What Does the Future Hold for AI Management?

The digital age has had a huge impact on information management. The ease with which computers save and transfer information instantaneously is astonishing. Innovation through AI promises previously unheard-of productivity levels and looks to propel practically every industry in some way or another. However, the following questions must be posed:

With artificial intelligence clearly at the forefront of information technology, should some things be off-limits?

Is there a downside to this?

How secure is our personal information with artificial intelligence?

If you believe your information may have been exposed, you can contact the company directly. Furthermore, if you are concerned your information may have been exposed elsewhere on the internet, free breach-checking sites can help you find out.
