Will AI Short-Circuit the Workplace? Some Considerations for the Continued Use of AI by Employers

The increased use of artificial intelligence (AI) in the workplace has already raised issues about working time, proper classification, and discrimination. This alert addresses some of these issues.

Working Time and Classification Issues

The Fair Labor Standards Act (FLSA) has defined the 40-hour workweek and other compensable time for non-exempt employees since 1938, when The War of the Worlds put Martians on the radio and AI was barely on science fiction’s mind. Now, AI is an everyday reality, one especially plugged into the workplace. From recording employee productivity to maximizing efficiency in human resources (HR), AI holds tremendous potential for employers. But AI’s capabilities, and its drawbacks, are also testing the limits of the FLSA and analogous state laws.

Additionally, as AI develops and its workplace presence grows, unanswered questions about compensable time and inadvertent discrimination multiply. Below, we outline potential legal issues that may arise as AI use expands in the workplace.

Using AI to Surveil and Monitor Employees

The COVID-19 pandemic expedited the use of AI-powered surveillance and monitoring tools for employee computer activity (and impromptu mouse jigglers as workarounds). Now, AI monitors the keystrokes, mouse activity, and/or webcams of many remote and in-person employees. Through that AI surveillance, employers can calculate a non-exempt employee’s compensable time without relying on the traditional clock-in-and-out.

But what happens when the entirety of an employee’s job is not wired to a desk or computer? For instance, an AI monitor might not account for an employee’s time if the employee prints and reads reports away from their workstation. Similarly, if the employee meets a client in person, the AI may not be able to capture that time.

If the employer glances at the AI-manicured timesheet, the employer might assign the employee more at-station work. But once the unaccounted off-station time is reported and the at-station time tallied, that employee’s total could pass the 40-hour threshold for overtime pay. For example, 38 recorded at-station hours plus five unrecorded off-station hours add up to 43 hours, three of them overtime.

The Band-Aid fix would be to make the employee responsible for recording their off-station time. However, the FLSA generally places the burden on the employer to accurately record all hours worked by non-exempt employees. So, as the technological and legal landscape evolves, the question remains how to fully account for an employee’s time when AI monitoring is employed.

Employees Losing Their Exemption Status Due to AI

AI and robots might not be replacing all workers, but they may be changing employees’ exemption statuses. Currently, under some tests, an employee may be exempt under the FLSA and analogous state laws if the employee exercises discretion and independent judgment with respect to matters of significance.

But what happens when the exempt employee merely manages AI that chooses the best outcome in its technological wisdom? If the employee retains no discretion, then they may no longer be exempt. Yet at this early stage, where AI has no ethical or moral sense, employers might still need an employee in the driver’s seat to make appropriate choices. Those choices may be considered discretionary for purposes of exemption under the FLSA and analogous state laws. However, the true amount and level of employee discretion required remains unclear.

Other novel questions that affect exemption status likewise remain open: Is it possible to hire or fire a robot for purposes of managerial exemptions? Does an employee “manage” a robot by adding new directives or by performing physical or technical maintenance? Exemption statuses may also be affected by state and local laws, which further muddles these scenarios.

Changing into Wearable Technology Might Be Compensable

Generally, time spent changing into gear that is integral to the work, such as protective scrubs at a pork processing plant, is compensable under the FLSA. As wearable technology becomes commonplace in the workforce, the time needed to put on the tech may become compensable. For example, the half hour a warehouse employee spends strapping on an AI-integrated exoskeleton (especially in the bulky nascent stage of exoskeletons) might be compensable. However, this raises additional issues. An employee’s lunch break, for instance, is unpaid only if the break is uninterrupted by work. Would wearing cumbersome technology be considered an interruption? Or would the employer compensate the employee to change out of, and then back into, the gear?

On the other hand, donning items such as a hardhat or an AI-imbued watch might not be compensable, given the insubstantial time it takes to put them on.

Turning Commute Time in an Autonomous Vehicle into Compensable Time

Compensable time during an employee’s drive to work was a non-issue with manually driven cars. Employees are generally not compensated for time spent commuting before the start of the workday and after its end. For instance, a retail employee’s compensable time generally starts after clocking in at the store. Similarly, an office worker’s compensable time begins upon arriving at the office and starting their first task.

With the accelerated development of autonomous vehicles (AVs), however, drivers might one day be legally allowed to take their eyes off the road. And, for some employees, those eyes would fixate on work during the commute. Meaningful work performed during the commute could then count toward the 40-hour threshold for non-exempt employees. De minimis contributions, on the other hand, might not be compensable. But the line between meaningful and de minimis work is not clear, leaving the issue open.

Potential for AI’s Algorithmic Discrimination

Gone are the days of digging through stacks of résumés or, for some companies, conducting synchronous interviews. Employers now use AI to weed out candidates who lack the qualifications they set. But not all HR dreams come true. Without proper data application, parameters, and oversight, AI processes can perpetuate discrimination and violate federal or state law.

One blatant example is EEOC v. iTutorGroup, where an AI hiring tool automatically rejected female applicants over 55 and male applicants over 60. Less blatantly, in Baker v. CVS Health Corporation et al., the plaintiff alleges that CVS used AI facial recognition software to make credibility assessments akin to a lie detector test, which is illegal in Massachusetts. Another interesting question has recently arisen: Can third-party AI vendors be held liable for discrimination under Title VII? In Mobley v. Workday, Inc., Workday is being sued in federal court for selling applicant-screening AI that allegedly discriminates on the basis of race, age, and disability.

Takeaways

Ultimately, it is still critical for the employer to be wired into the hiring process. At AI’s current stage, there is great potential for algorithmic discrimination if the wrong boxes are checked or the incorrect data is used to steer the AI’s decision-making. Potential candidates may fly under the radar, and bring suit over perceived or actual discrimination, if hiring decisions are left to AI’s complete discretion.

The future of AI in the hiring process may eliminate these issues if vendors can incorporate oversight and other safeguards. To help curb discriminatory outcomes, the EEOC and other entities have released guidance on using AI in hiring. But, for the time being, which types of AI are allowed in the hiring process, and even whether applicants must be told that AI is being used, remain open questions.
