
As governments lift "shelter-at-home" orders, employers face difficult decisions about how to keep their employees safe. Some are evaluating artificial intelligence (AI) solutions to limit occupational exposure to Covid-19. Use of AI, in combination with other measures, could accelerate the safe return of employees to the workplace.

Of course, willingness to share health data will affect AI's effectiveness. The public's tolerance for sharing personal data for the greater good may be heightened in the short term as we continue developing harm-reduction technologies to battle the spread of the virus. But a long-term view of employee privacy will be imperative for the continued successful integration of AI in the workplace.

 

Emerging Technologies Help Employers Safely Return Employees to Work

One way AI can assist in successful return-to-work strategies is through contact tracing and predictive modeling to help halt the spread of the virus. AI-driven algorithms can scour meeting invites, email traffic, business travel, and GPS data from employer-issued computers and cell phones to give employers advance warning, so they can steer employees away from danger zones or quickly contain a potential outbreak at a location.
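As a rough illustration of the mechanics, the sketch below builds a contact graph from meeting-attendance records and lists the direct contacts of an employee who reports symptoms. The record format, employee IDs, and function names are hypothetical, not drawn from any particular vendor's product.

```python
from collections import defaultdict
from datetime import date

# Hypothetical meeting records: (meeting ID, date, attendee employee IDs).
meetings = [
    ("m1", date(2020, 5, 4), ["e101", "e102", "e103"]),
    ("m2", date(2020, 5, 5), ["e102", "e104"]),
    ("m3", date(2020, 5, 6), ["e103", "e105"]),
]

def build_contact_graph(meetings):
    """Map each employee to the set of colleagues they shared a meeting with."""
    contacts = defaultdict(set)
    for _meeting_id, _day, attendees in meetings:
        for person in attendees:
            contacts[person].update(a for a in attendees if a != person)
    return contacts

def direct_contacts(contacts, case):
    """Return employees who met directly with a reported case."""
    return sorted(contacts.get(case, set()))

contacts = build_contact_graph(meetings)
print(direct_contacts(contacts, "e102"))  # ['e101', 'e103', 'e104']
```

A production system would layer on email, travel, and GPS signals, but the underlying idea is the same: turn interaction records into a graph and traverse it from a reported case.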

Other AI tools used by employers help test, diagnose, and otherwise monitor employee health. One such platform is a "fitness for duty" application built around a digital health survey that asks employees for personal information such as health status or recent travel. The data from these programs may be used to build analytical models, such as a public dashboard on which employers can monitor the spread of Covid-19 within their company. These programs help employers manage business continuity and navigate the uncertainty of Covid-19.

Biometric data also fuels social-distancing and heat-detection cameras, some of which are paired with facial-recognition software that employers can use to track and identify individuals suspected of being unwell. For instance, one company has created camera software that rings a buzzer or alerts security staff when two people stand less than six feet apart. Another company has created an AI-based camera solution that can scan groups to detect and identify anyone with an elevated temperature in real time. Such a platform can help keep employees safe while sparing organizations from slowly checking people one by one for symptoms of Covid-19.
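To make the distance check concrete, here is a minimal sketch of the pairwise proximity test such camera software might run over detected person positions. The coordinate format and the six-foot threshold applied directly to floor coordinates are simplifying assumptions; a real system would first map pixel detections to real-world distances.

```python
import math
from itertools import combinations

SIX_FEET = 6.0  # distancing threshold, in feet

def too_close(positions, threshold=SIX_FEET):
    """Return pairs of detected people standing closer than the threshold.

    `positions` maps a detection ID to estimated floor coordinates in feet
    (hypothetical output of an upstream person-detection model).
    """
    violations = []
    for (id_a, pos_a), (id_b, pos_b) in combinations(positions.items(), 2):
        if math.dist(pos_a, pos_b) < threshold:
            violations.append((id_a, id_b))
    return violations

detections = {"p1": (0.0, 0.0), "p2": (4.0, 3.0), "p3": (20.0, 0.0)}
print(too_close(detections))  # [('p1', 'p2')] -- they are 5 feet apart
```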

The effectiveness of many AI technologies will depend on connectivity that extends beyond cellular networks. As a result, innovators are turning to Bluetooth and other alternatives, which will expand the scope of the information AI can gather.

 

Privacy Considerations with Adoption of Covid-Related Technologies

While the use of these tools may help reboot our economy, their use by employers raises privacy compliance issues: they involve the collection, use, aggregation, analysis, and disclosure to third parties of highly personal information, such as biometric data, personally identifiable information (PII), and geolocation data. A poll of about 2,000 people revealed that although more than half of Americans support anonymized government smartphone tracking, data privacy remains one of the biggest challenges facing companies.

One of the threshold issues to consider when implementing AI in the workplace is whether the data will be pseudonymized, de-identified, or aggregated. On May 7, 2020, U.S. senators formally introduced the "Covid-19 Consumer Data Protection Act," which contains protections for personal health, geolocation, and proximity data during the pandemic. The bill proposes a series of requirements regarding the disclosure, consent, collection, and destruction of personally identifiable data that is collected for Covid-19–related purposes by specific entities. Moreover, the bill exempts information that is aggregated, de-identified, or publicly available from its definition of "covered data."

Data that is not personally identifiable is also exempt under currently enacted laws. For example, the European Union's (EU's) General Data Protection Regulation (GDPR) makes the distinction between pseudonymous data and anonymous data in its definition of what is "personal data" governed by the law. The GDPR does not restrict the use of anonymous data. Further, within the United States, the California Consumer Privacy Act (CCPA)—which many view as an American state variation of the GDPR—permits businesses to collect, use, retain, sell, or disclose information that is de-identified or aggregated, subject to meeting technical specifications.
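For a rough sense of the technical distinction those laws draw, the sketch below pseudonymizes employee identifiers with a keyed hash (still re-linkable by whoever holds the key, and therefore still personal data under the GDPR) and contrasts that with aggregation, which keeps only group-level counts. The key, record format, and field names are hypothetical, and nothing here should be read as satisfying any statute's technical specifications.

```python
import hashlib
import hmac
from collections import Counter

SECRET_KEY = b"rotate-and-store-separately"  # hypothetical key held by the employer

def pseudonymize(employee_id: str) -> str:
    """Replace an identifier with a keyed hash (truncated for readability).

    Whoever holds the key can re-link the hash to the person, so this
    remains pseudonymous -- not anonymous -- data under the GDPR."""
    return hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()[:12]

def aggregate(records):
    """Report only site-level counts; no individual identifiers survive."""
    return Counter(site for _employee, site in records)

records = [("e101", "HQ"), ("e102", "HQ"), ("e103", "Warehouse")]
pseudonymous = [(pseudonymize(emp), site) for emp, site in records]
print(pseudonymous[0])     # e.g. ('3f2a1b...', 'HQ') -- still linkable via the key
print(aggregate(records))  # Counter({'HQ': 2, 'Warehouse': 1})
```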

If employers are collecting data that is personally identifiable, they should carefully review with counsel their privacy disclosures and related policies to determine whether employees have already provided consent for the collection of personal data that the new AI technology will use, or whether new consent is required.

Currently, laws governing consent for collection of personally identifiable data vary by type of data and by state. For example, under the CCPA, an employer needs to provide new notice before it collects new categories of personal information from its employees. Further, if personal information is used for a new, previously undisclosed purpose, then the employee must provide "explicit" consent. The CCPA does not apply to information governed by the Health Insurance Portability and Accountability Act (HIPAA) or the state equivalent. However, it does apply to biometric information; Internet activity; geolocation data; and audio, electronic, visual, thermal, or similar information. And other states are considering enacting similar laws.

To address privacy concerns, some companies are developing technologies that integrate cryptography and decentralized networking to let users limit and control the disclosure of their data. For example, Coalition, a global contact-tracing app for smartphones, encourages individuals to self-report their Covid-19 status. Through cryptography, it provides each individual with a temporary, anonymized ID tied to their cell phone. The app collects and locally stores the cell phone user's interactions with other anonymized IDs through Bluetooth technology. If a person self-reports that they are infected with Covid-19, the app uses decentralized, local cloud technology to alert users of their contact with an infected individual.

Participants are never identified; only their anonymous IDs are used. Innovators are working on further anonymizing the user process by developing an anonymous token ID system. The app's use of a local, decentralized network that is based on proximity and interactions means it does not need to collect and store location data, build movement profiles, or maintain identifiable features of users' contact information and end devices. Efforts like this demonstrate the power of advanced technologies to reduce risk during this pandemic, while permitting individuals to remain agents of their own data.
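The sketch below models the general pattern described here: each device broadcasts a rotating anonymous ID, stores encountered IDs locally, and checks them against IDs that users self-report as infected. It is a simplified illustration of this class of protocol, not Coalition's actual implementation.

```python
import secrets

class TracingDevice:
    """Simplified model of a decentralized, Bluetooth-style contact tracer."""

    def __init__(self):
        self.current_id = None
        self.seen_ids = set()  # encounters stored locally, never uploaded

    def rotate_id(self):
        """Issue a fresh temporary anonymous ID with no link to the user's identity."""
        self.current_id = secrets.token_hex(8)
        return self.current_id

    def record_encounter(self, other_id: str):
        """Log a nearby device's broadcast ID locally."""
        self.seen_ids.add(other_id)

    def check_exposure(self, reported_ids: set) -> bool:
        """Compare locally stored encounters against IDs self-reported as infected."""
        return not self.seen_ids.isdisjoint(reported_ids)

alice, bob = TracingDevice(), TracingDevice()
bob_id = bob.rotate_id()
alice.rotate_id()
alice.record_encounter(bob_id)        # the two phones passed within Bluetooth range
reported = {bob_id}                   # Bob self-reports a positive test
print(alice.check_exposure(reported)) # True -- alert raised on Alice's device
```

Because the matching happens on each device against a set of anonymous IDs, no central server ever learns who met whom, which is what allows such systems to avoid collecting location data or movement profiles.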

 

Recommended AI Implementation Practices

Before implementing such solutions within the workplace, employers should be sure they understand how the AI system is collecting information, the purpose of the collection of this information, and how the information will be stored. Employers should ask whether the technology is collecting the right data or too much data, how long the data will be stored, and who has access to the data being collected (i.e., the developer, the government, or other third parties). Only after fully understanding all of these aspects can employers accurately analyze the risks and legal implications of deploying a Covid-19–related AI platform.

Further, just because a product has multiple functionalities does not mean that an employer should use all of them. For example, an employer should consider whether it is necessary to track all employee movements, as opposed to just tracing contacts. Another crucial aspect of vetting the technology is understanding the security mechanisms embedded in the systems. This includes the use of encryption, pseudonymization, and anonymization, where appropriate. Companies should have their IT security teams analyze the AI program as part of the vetting process.

Another important step in using Covid-19–related AI technologies is creating procedures and implementing policies for use of the platform. For instance, employers should designate a limited team that will have access to the collected information and exclude everyone else. Similarly, employers should develop strict confidentiality guidelines around the use of any information collected from the technologies. Companies should also regularly audit the technology to ensure it is not creating an adverse impact on any protected classes.

Finally, companies using employment-related AI should continuously monitor laws and regulations in the jurisdictions in which they do business and ensure they comply with all that are applicable, as well as confer with counsel who can help navigate the various nuances of each law. Striking a balance between maintaining employees' privacy and safeguarding their health will be difficult, but it can be achieved with the right technology and suitable understanding of the applicable laws and regulations.

 


Natalie Pierce is a Shareholder at Littler in the San Francisco office. She is Co-Chair of the firm's Robotics, AI, and Automation practice group.

Julie Stockton is an associate at Littler in the firm's San Francisco office.

Courtney Chambers is an associate at Littler in the firm's San Francisco office.

 

 
