
The 10 Things All Employers Must Include in Any Workplace AI Policy

June 19 - Posted at 9:32 AM

Whether your organization has deployed a generative AI tool for your employees or hasn’t (yet) hopped on the bandwagon, now is the time to create a workplace policy governing the use of the technology. Many organizations are exploring ways their workforces can harness the revolutionary advances in productivity, efficiency, and creativity that generative AI (GenAI) products like ChatGPT, Google’s Bard, or Microsoft’s Bing can bring. And even if you aren’t doing the same, your employees almost certainly are. But how can you do so responsibly? A first step is developing a workplace GenAI policy. Read on for the 10 things you should include.

First Things First: What is GenAI?

Recent advances in GenAI – kicked off most prominently by the release of ChatGPT and soon joined by Bard and the reformulated Bing – have captured the attention of a broad audience. Almost immediately, employees and employers alike began thinking about the ways the technology could be used in the workplace, seemingly limited only by our imaginations. How, though, is this different from other forms of AI, which have been around for years?

You’ve probably been living and working with some form of AI for a while now. Does your company have a chatbot that answers simple questions about where to find a resource? No doubt your email platform offers an auto-complete feature as you type. Both are examples of artificial intelligence – just in rudimentary form.

Unlike this prior technology, GenAI is able to generate original human-like output and expressions in addition to describing or interpreting existing information. In other words, it appears to “think” and respond like a human. However, GenAI is limited by the data upon which it was trained, and will not have the judgment, strategic thinking, or contextual knowledge that a human does. These and other technological limitations and risks are why having a sound GenAI policy is so important.

The 10 Components Your GenAI Policy Should Include

While each company should customize a GenAI policy to suit its own needs and priorities, there are 10 topics to consider at a bare minimum. 

  1. Outline the Policy’s Purpose and Scope. The first thing your policy should include is a description of its purpose and scope.
    • Generally, the purpose of the policy would be to provide guidelines for the organization’s development, implementation, use, and monitoring of GenAI in the workplace.
    • The scope should clearly tell your employees which areas are covered – and here’s where you need to be very clear. If your organization does not provide any GenAI products for your employees, you still need to make clear that the policy covers the use of any third-party or publicly available GenAI tool, such as ChatGPT, Bard, Bing, DALL-E, Midjourney, and other similar applications that mimic human intelligence to generate answers or work product, or to perform certain tasks. And if you do supply your workers with a GenAI product, you should make sure your policy spells out appropriate usage of that specific tool and the circumstances in which it may be used.
  2. Maintain Data Privacy and Security. Data privacy and security are critical components of any GenAI policy, particularly given the strict data privacy laws that exist and are developing across the country (and internationally). Your policy should create safeguards to protect the data entered into any GenAI technology, addressing data collection, storage, and sharing. At a minimum, you should prohibit your employees from entering private or personal information into any GenAI platform (a simple technical screen is sketched after this list).
  3. Uphold Company Confidentiality. You will also want to ensure that your company’s most important data – trade secrets, private information, PII of your employees and other third parties, confidential data, sensitive matters, etc. – are kept far away from GenAI. This will not only help you avoid embarrassing or damaging situations, but could help you uphold and defend the confidential nature of this information if you ever find yourself in trade secrets litigation.
  4. Ensure Your Commitment to Diversity is Not Compromised. GenAI has the capacity to produce content that conflicts not only with your company’s diversity goals but also with legal anti-discrimination standards. For this reason, you should plainly state that information received from GenAI must be double-checked using reasoned human judgment to ensure it does not run afoul of your company’s commitment to diversity.
  5. Prohibit Employment-Based Decisions Aided by GenAI. Related to the previous point, you should clearly state that GenAI tools should not be used to make or help you make employment decisions about applicants or employees. This includes recruitment, hiring, retention, promotions, transfers, performance monitoring, discipline, demotion, terminations, or other decisions. Of course, if you have deployed a GenAI tool to help you navigate human resources activity, you can allow such use – so long as you are confident that your policies surrounding the use of that specific tool uphold legal principles (including any relevant state laws) and your highest company standards.
  6. Prevent Copyright or Other Theft Concerns. The danger always exists that output from GenAI platforms (including image generators like DALL-E 2 and Midjourney) could include material from copyrighted or otherwise protected sources. If your employees use a GenAI tool to generate content for company use that happens to include such material, you could find yourself in legal hot water. Make sure your employees know not to pass off any information or content received from GenAI as their own without double-checking sources. They should use GenAI as an idea generator, not as a replacement for content creation.
  7. Outline Best Practices. The inherent limitations of GenAI in its current form mean you should outline best practices for your employees to follow whenever they use the technology to aid their work. Some examples:
    • Because GenAI is prone to hallucinations and outdated answers, you should require your workers to confirm any information received before relying on it in any capacity.
    • There is always a risk of a data breach involving any GenAI provider. While your workers might think they can input any question into the system with complete anonymity, they should be warned to treat anything they enter into any GenAI platform as if it will go viral on the Internet – with their name and your company’s identity attached.
    • Your company’s supervisors should want to know when – and to what extent – GenAI was used to help complete a task. For that reason, you might recommend that employees disclose when they are using the technology and the extent it aided the creation of any content they develop.
  8. Be Clear About Consequences. As with any company policy, you should inform your employees that they could face repercussions should they violate any of its tenets. Let them know they could face disciplinary action – up to and including immediate termination, and possibly legal action – should they violate the policy. The policy should also direct employees to report potential violations they learn about to their supervisor or to HR.
  9. Include a Disclaimer. Due to the broad reach of the National Labor Relations Act – even over companies that don’t have a unionized workforce – you need to make sure that you offer protections for behavior that has been upheld as protected by the NLRB and courts. Work with your legal counsel to create a disclaimer that clearly states your policy is not intended to interfere with, restrain, or prevent employee communications regarding the rights protected by federal labor law.
  10. Gather Input From Across the Spectrum. Finally, make sure you solicit multi-disciplinary input from stakeholders across the organization to ensure your policy is comprehensive, effective, and realistic. Obvious stakeholders include members of your regulatory, compliance, IT, DEI, and legal departments.
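
As a companion to item 2 above, here is a minimal, hypothetical sketch (in Python) of the kind of pre-submission screen an IT team might layer on top of the written policy. The patterns and function names are illustrative assumptions only – a real deployment would rely on a dedicated data-loss-prevention tool rather than a few regular expressions.

```python
import re

# Illustrative only: a basic screen that flags obvious PII in a draft prompt
# before it is sent to any GenAI platform. Not a substitute for a real
# data-loss-prevention (DLP) tool.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the labels of any PII patterns found in a draft prompt."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

draft = "Draft a warning letter to Jane Doe, SSN 123-45-6789, about attendance."
findings = screen_prompt(draft)
if findings:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
else:
    print("Prompt passed the basic PII screen.")
```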

What’s Next?

Once you publish your GenAI policy, your work is not over – it’s just beginning. Besides the all-important work of training your workforce on the policy parameters and ensuring continued and consistent enforcement, you should treat the policy’s creation as the right time to establish a framework for GenAI governance and oversight.

Please let us know if you would like a copy of a complimentary GenAI policy provided courtesy of Fisher Phillips LLP.

EEOC’s Latest AI Guidance Sends Warning to Employers

May 23 - Posted at 9:00 AM

Employers using or thinking about using artificial intelligence (AI) to aid with workplace tasks received another reminder from the federal government that their actions will be closely scrutinized by the EEOC for possible employment discrimination violations. The federal agency released a technical assistance document on Thursday warning employers deploying AI to assist with hiring or employment-related actions that it will apply long-standing legal principles to today’s evolving environment in an effort to find possible Title VII violations. What are the five things you need to know about this latest development?

1. EEOC Confirms That Employers’ Use of AI Could Violate Workplace Law

The EEOC started by confirming its crystal-clear position in its technical assistance document: an improper application of AI could violate Title VII, the federal anti-discrimination law, when used for recruitment, hiring, retention, promotion, transfer, performance monitoring, demotion, or dismissal. The EEOC outlined four instances where the use of AI during the hiring process – and one example during an employment relationship – could trigger Title VII violations:

  • resume scanners that prioritize applications using certain keywords;
  • “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns;
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test; and
  • employee monitoring software that rates employees on the basis of their keystrokes or other factors.

The agency didn’t say that these are the only types of workplace-related AI methods that could come under fire – or that these types of tools are inherently improper or unlawful. It did say, however, that preexisting agency regulations (the Uniform Guidelines on Employee Selection Procedures) that have been around for over four decades can apply to situations where employers use AI-fueled selection procedures in employment settings.

The agency said this is especially true in “disparate impact” situations – where employers may not intend to discriminate against anyone but deploy any sort of facially neutral process that ends up having a statistically significant negative impact on a certain protected class of workers.   

2. “Four-Fifths Rule” Can Be Applied to AI Selections

The EEOC pointed out that employers can use the “four-fifths” rule as a general guideline to help determine whether an AI selection process is having a disparate impact (and we apologize in advance for the impending use of math). The test compares the selection rate of one group against the highest selection rate of any group. If the group’s rate is less than four-fifths (80%) of that highest rate, you might be subject to a disparate impact challenge. If that sounds confusing, here is the example provided by the EEOC.

Assume your company is using an algorithm to grade a personality test to determine which applicants make it past a job screening process.  

  • 80 White applicants and 40 Black applicants take the personality test.
  • 48 of the White applicants advance to the next round (equivalent to 60%).
  • 12 of the Black applicants advance to the next round (equivalent to 30%).
  • The ratio of the two rates is thus 30/60 (or 50%).
  • Because 30/60 (or 50%) is lower than 4/5 (or 80%), the four-fifths rule says that the selection rate for Black applicants is substantially different than the selection rate for White applicants – which could be evidence of discrimination against Black applicants.
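
The same arithmetic, worked through as a short Python sketch using the EEOC’s example numbers:

```python
# The EEOC's four-fifths example, worked through in code.
applicants = {"White": 80, "Black": 40}   # who took the personality test
advanced   = {"White": 48, "Black": 12}   # who passed the screen

# Selection rate for each group: advanced / applicants.
rates = {group: advanced[group] / applicants[group] for group in applicants}

# Compare each group's rate against the highest rate.
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    verdict = "below" if ratio < 0.8 else "clears"
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.0%} -> {verdict} the four-fifths threshold")
# Output includes: Black: rate 30%, ratio 50% -> below the four-fifths threshold
```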

Note, however, that the EEOC said that this kind of analysis is merely a rule of thumb. It’s a rudimentary way to draw an initial inference about the selection processes. If you end up finding problematic numbers, it should prompt you to acquire additional information about the procedure in question, according to the EEOC, and isn’t necessarily indicative of a definitive Title VII violation. Similarly, just because your numbers clear the four-fifths hurdle doesn’t mean that the particular selection procedure is definitely lawful under Title VII. It can still be challenged by the agency or a plaintiff in a charge of discrimination.

3. EEOC Encourages Proactive Self-Audits

In a statement accompanying the release of the technical assistance document, EEOC Chair Charlotte Burrows said that employers should test all employment-related AI tools early and often to make sure they aren’t causing legal harm. This doesn’t mean just using the four-fifths rule, but also using a thorough auditing process involving a variety of potential examination methods on all AI functions. “I encourage employers to conduct an ongoing self-analysis to determine whether they are using technology in a way that could result in discrimination,” she said.  

But not mentioned by the EEOC: a reminder that you should approach any self-audit with the help of legal counsel. Not only can experienced legal counsel help guide you about the best methodologies to use and assist in interpreting the results of any audit, but using counsel can help cloak your actions under attorney-client privilege, potentially shielding certain results from discovery. This can be especially beneficial if you identify changes that need to be made to improve your process to minimize any unintentional impacts.

4. You’re On the Hook For Problems Caused by Your AI Vendors

The agency also noted quite clearly that you can’t duck your responsibilities by using a third party to deploy AI methods and then blaming them for any resulting discriminatory results. It said that you may still be responsible if the AI procedure discriminates on a basis prohibited by Title VII even if the decision-making tool was developed by an outside vendor.

“In addition,” said the EEOC, “employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer’s behalf.” This may include situations where you rely on the results of a selection procedure that an agent administers on your behalf.

The EEOC suggests that you specifically ask any vendor you are considering to develop or administer an algorithmic decision-making tool whether steps have been taken to evaluate whether that tool might cause an adverse disparate impact. It also recommends asking whether the vendor relied on the four-fifths rule of thumb or on a standard such as statistical significance, which courts often use when examining employer actions for potential Title VII violations.
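
One common formulation of that statistical-significance standard is a two-proportion z-test. Here is a sketch applying it to the EEOC’s example numbers from above; this is one illustrative method, not a test the EEOC prescribes.

```python
import math

# Two-proportion z-test on the EEOC's example numbers: one common way to
# assess whether a selection-rate gap is statistically significant.
# (Illustrative only; courts and experts may use other formulations.)
n1, x1 = 80, 48   # White applicants: tested, advanced
n2, x2 = 40, 12   # Black applicants: tested, advanced

p1, p2 = x1 / n1, x2 / n2                   # 0.60 and 0.30
pooled = (x1 + x2) / (n1 + n2)              # 0.50
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se                          # about 3.10
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p, about 0.002

print(f"z = {z:.2f}, two-tailed p = {p_value:.4f}")
# A p-value this small suggests the gap is unlikely to be chance alone.
```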

5. EEOC’s Guidance is Part of Bigger Trend

This technical assistance document is part of a bigger trend we’re seeing from federal agencies that are increasingly interested in the ways that AI may lead to employment law violations. Just last month, in fact, EEOC Chair Burrows teamed up with leaders from the Department of Justice, the Federal Trade Commission and the Consumer Financial Protection Bureau to announce that they would be scrutinizing potential employment-related biases that can arise from using AI and algorithms in the workplace.

And within the past year, the EEOC teamed up with the DOJ to release a pair of guidance documents warning that relying on AI to make staffing decisions might unintentionally lead to discriminatory employment practices, including disability bias, followed by the White House releasing its “Blueprint for an AI Bill of Rights” that aims to protect civil rights in the building, deployment, and governance of automated systems.

While none of these guidance documents create new legal standards or carry the force of law like a statute or regulation, they do carry weight, may signal where the agencies are focusing their enforcement efforts, and can be cited by agencies and plaintiffs’ attorneys as best practices that employers should follow. States and cities have gotten in on the action, too, with New York City’s law set to take effect in July and a new bill advancing toward the Governor in California. For these reasons, you should take this guidance seriously and adapt your employment practices as necessary to keep pace with the change rapidly unfolding before our eyes.


Beyond Job Descriptions: 6 HR Tasks ChatGPT Can Do for You

March 24 - Posted at 8:29 AM

Since ChatGPT’s launch in November 2022, many HR professionals have used the generative artificial intelligence tool to perform some of their daily tasks. While anxiety remains about “robots taking our jobs,” ChatGPT can make HR professionals more productive, freeing them from repetitive tasks and allowing them to spend more time on strategic work. However, it still needs to be used selectively and with caution to avoid a costly mistake.

ChatGPT as an HR Tool

Like any emerging technology, ChatGPT offers both benefits and risks. Using it effectively requires a willingness to learn and experiment. “ChatGPT saves me hours of work every week and boosts my productivity,” said Declan Daly, managing partner at Bundoran Group, a recruitment agency. “I’m constantly discovering new ways to use it in my work.” 

Caroline Reidy, managing director of The HR Suite, an HR services firm, shares Daly’s enthusiasm for ChatGPT: “You might not get perfect results every time you use it, but generating a quick, working draft with ChatGPT can significantly reduce the time you spend on document development and other administrative tasks.”

However, relying uncritically on ChatGPT without performing a careful, human review of its generated content has some large potential risks. ChatGPT’s generated content may sound reliable, but it’s also generic and historical. Generative AI can synthesize what others may have said in the past, but it can’t offer specific guidance about what your company should do now in a specific circumstance. Organizations will always need HR professionals who can do their own thinking.

Common HR Tasks ChatGPT Can Perform

People are already paying attention to ChatGPT for its ability to write job descriptions. LinkedIn, for example, just announced it will soon introduce a feature enabling AI-written job posts. Here are six other HR tasks the tool can help HR professionals perform:

1. Recruiting. You can use ChatGPT to generate relevant interview questions to ask candidates for roles you aren’t familiar with. You can also ask for the average salary for specific jobs, or for the benefits commonly offered for a particular role in other industries, narrowed down by geography.

2. Onboarding. HR professionals can set up ChatGPT to give real-time support to new hires by answering questions about company policies, procedures and benefits, as well as offer them guidance on completing necessary paperwork.

3. Administrative tasks. ChatGPT can help HR professionals craft and send announcements and reminders to employees about events, such as training programs. The AI tool can also be used to write all sorts of documents (from handbooks to policy memos and beyond), as well as send automatic e-mail responses.

4. Employee self-service. ChatGPT can be leveraged to build conversational chatbots, providing instant support for common questions about benefits, vacation policies and payroll (see the sketch after this list). More complex employee issues can be escalated from self-service tools to an HR professional for follow-up – and even that human follow-up can be assisted by ChatGPT.

5. Employee surveys. You can ask ChatGPT to craft survey questions for measuring employee engagement. ChatGPT enables you to conduct companywide polls to gauge opinions on specific workplace issues, such as the pros and cons of hybrid work and the viability of a four-day workweek.

6. Performance reviews. ChatGPT can help with performance management by supplying HR professionals and managers with instructions on how to conduct performance appraisals and by responding to inquiries from employees about performance metrics.
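
As a companion to item 4, here is a minimal sketch of an employee self-service chatbot using OpenAI’s Python client. The policy excerpt, model name, and function are illustrative assumptions; the sketch also shows the grounding technique discussed below – supplying your own policy text so answers are customized rather than generic.

```python
from openai import OpenAI  # assumes the openai package is installed and an API key is configured

# Hypothetical excerpt from an internal knowledge base. In practice this would
# be drawn from your HR policy documents, with confidential data excluded.
POLICY_CONTEXT = """
Vacation: employees accrue 1.25 days per month, capped at 30 days.
Benefits enrollment: within 30 days of hire, via the HR portal.
"""

client = OpenAI()

def hr_chatbot(question: str) -> str:
    """Answer a routine employee question, grounded in company policy text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an HR assistant. Answer only from the policy text below. "
                    "If the answer is not there, say an HR professional will follow up.\n"
                    + POLICY_CONTEXT
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(hr_chatbot("How many vacation days do I accrue each month?"))
```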

Maximizing ChatGPT’s Benefits, Reducing Risks 

ChatGPT’s generated content reflects its training data, which comes from the Internet and can be inaccurate or biased. For example, if the data ChatGPT was trained on says “The moon is made of yellow cheese,” its generated content may reflect that. HR professionals can provide ChatGPT with detailed source information, including employee data, internal company knowledge bases and HR policies or procedures, to generate customized content and answer questions.

Another challenge in using ChatGPT is that generated content can have the wrong tone. “As an HR professional, you sometimes work on sensitive topics where automating replies might work against you,” said Ryan Faber, founder of Copymatic, an AI-based business writing platform. “Sensitive tasks such as layoffs and terminations should never be handed over to ChatGPT, because human empathy and nuance are required.”

Finally, the generated content might not comply with data privacy or other legal standards in HR. Again, be sure to review what ChatGPT writes to make sure the content is usable and compliant.

Trust, But Verify

What former President Ronald Reagan once said about negotiating with the Soviet Union also applies to using ChatGPT: “Trust, but verify.” Many HR professionals recommend using ChatGPT as a starting point, but would still speak to an expert or refer to another data source for verification of what the tool generates. 

Being able to use ChatGPT effectively, and with the right safeguards and controls in place, will become an essential HR skill moving forward. It will take some time for HR professionals and organizations to become good at using the tool.

At the end of the day, ChatGPT is an important HR tool that should be deployed critically and selectively for HR tasks, with a clear-eyed understanding of its strengths and weaknesses.

© 2024 Administrators Advisory Group, Inc. All Rights Reserved