Life as a career development professional (CDP) is busy. We juggle multiple responsibilities – supporting clients, collaborating with employers, meeting managerial expectations and adhering to funder requirements. Through this, we constantly make decisions that shape our clients’ careers. While some choices may seem minor, they can have far-reaching impacts. Ethical decision-making should be at the core of these decisions.
Now, new ethical considerations are emerging with the growing presence of artificial intelligence (AI), large language models (LLMs) and machine learning in career services – whether through AI-powered job-matching platforms, automated resume screening or virtual career coaching tools. We need to critically evaluate these technologies’ accuracy, biases and limitations. Our ethical considerations must evolve to balance client autonomy with professional guidance, ensure equity in service delivery and maintain confidentiality in an increasingly complex digital world.
Ensuring that evolving service models align with our ethical commitments is essential to maintaining integrity in our practice. Formal ethical codes, such as those from the CCDF and NCDA, provide structured guidelines on confidentiality, client autonomy and equity. While these codes are foundational, they can be overlooked in fast-paced environments with pressure to meet funder and managerial expectations. Additionally, they may not fully address the nuanced dilemmas CDPs face, especially in an AI-driven landscape.
Ethics can feel abstract in the face of pressing client challenges like job loss or systemic barriers, but there are concrete steps you can take now to navigate these challenges with confidence. Here, we’ll explore practical strategies and ethical considerations to help you integrate AI-driven tools responsibly into your practice.
Informed consent
A core tenet of ethical career development practice is informed consent. It’s more than a checkbox or signature; it’s about transparency, trust and empowering clients to make informed decisions. From the first meeting, we set the tone by clearly outlining what we offer, how the agency will use information and any risks involved. This information helps manage expectations, builds rapport and ensures clients feel in control of the process, especially when sharing information with external partners like employers or service agencies.
However, advancements in AI and LLMs, such as ChatGPT and Copilot, introduce new complexities to informed consent. Many clients – and, if we are being honest, CDPs – may not fully understand how AI-driven assessments, job-matching algorithms or applicant-tracking systems process their data.
To support clients effectively, CDPs must understand how these technologies work, including how they collect, analyze and interpret data. AI tools such as ChatGPT are trained on vast datasets that often reflect societal patterns and existing inequalities, so biases can emerge from the data itself. AI-driven recruitment tools may unintentionally reinforce stereotypes embedded in a company’s hiring history – for example, if the company has typically recruited white men from a particular Ivy League school. Additionally, the algorithms behind many AI-driven recruitment tools prioritize recognizing specific patterns, which can overlook individuality and reinforce systemic barriers. CDPs are uniquely positioned to advocate for more transparent AI systems and ethical AI policies in hiring and career services, given their role as a link between jobseekers and employers.
To help clients navigate informed consent with these AI tools, CDPs must explain, in plain language, where AI is embedded in career services – from resume scanners and job-matching algorithms to applicant tracking systems (ATS) and AI-driven interview tools. Some quick and easy steps to support this include:
- Encouraging clients to read and question privacy policies
- Teaching them how to opt out of AI-based decisions when possible
- Providing guidance on manually tailoring resumes to counteract AI filtering biases
From there, we can continue working with clients to recognize potential biases reported by media outlets and by organizations that monitor AI use, such as the Privacy Commissioner of Canada. We can also empower them to assess job recommendations and automated rejections critically. By fostering AI literacy and ethical awareness, CDPs ensure that clients can engage with these technologies while protecting their rights and career opportunities.
Confidentiality
Confidentiality is one of the most significant challenges faced by CDPs; our understanding of AI tools plays a crucial role in managing this issue effectively. CDPs often assist clients with tasks such as uploading resumes to application sites, using AI-powered grammar tools and accessing online assessments. However, how these AI platforms handle client data can unintentionally violate the confidentiality practices upheld by the agencies we represent.
AI tools such as Gemini, ChatGPT and Siri leverage user data to train and refine their performance. For instance, while LLMs like ChatGPT don’t typically retain personal information beyond the immediate interaction, a logged-in user’s data – including chats, uploaded documents and personal identifiers – may be stored for longer by the host platform (e.g. OpenAI). Furthermore, many AI-powered platforms default to collecting user interaction data, such as preferences or voice commands, to personalize experiences and optimize features.
Given this, CDPs should encourage clients to review their privacy settings on these platforms and turn off data-collection features to ensure their personal information remains secure. This proactive step is vital in protecting client confidentiality and maintaining trust in our services.
Access
Client access is another critical ethical concern, especially as AI and digital tools become more embedded in career services. As CDPs, we work with diverse clients in various settings. It’s essential to recognize that not everyone has the same level of access to job opportunities, career services or the digital tools that increasingly drive the job market. Financial constraints and limited access to technology can restrict clients’ ability to leverage these tools, especially in rural communities where internet access is not guaranteed.
As CDPs, we can proactively identify these inequities and advocate for more inclusive practices, such as funding for technology, training on digital tools, and policy changes within agencies and governments. This helps to ensure AI-powered career services and employment opportunities are accessible and equitable and that all clients have the necessary support to navigate and engage with emerging technologies.
Strategies
The increasing use of AI in recruitment adds another layer of complexity. Automated hiring tools, job-matching algorithms and AI-driven career assessments are becoming more common. How do we ensure these tools support rather than disadvantage specific populations and help clients understand AI’s role in their job search?
In reviewing some of the strategies that organizations and agencies have been working on recently, including presentations at CERIC’s Cannexus25 conference, three main areas of focus emerge: client education, workshops and technology resources.
- Support client education: Organizations can develop AI literacy resources that explain AI in simple terms and cover elements such as opt-out options and privacy settings.
- Offer practical workshops: Tangible skills training might focus on tailoring resumes for ATS filters, maximizing AI tools for job search, etc.
- Provide technology resources: This could include organizations providing free internet access through their resource centre, helping clients set up accounts for AI-powered tools, and advocating to local and provincial governments for funding or grants that can facilitate clients’ access to technology.
While some resources are available to support career practitioners’ ethical AI use, the landscape is evolving rapidly. Staying informed about these changes is crucial for CDPs to provide ethical, informed and client-centred support.
At its core, ethical practice in career development is about fostering trust. Every decision we make – whether about confidentiality, client access or AI – reinforces or erodes that trust. Our clients rely on us to act with integrity, advocate for their best interests, and ensure that our tools and systems serve them equitably. By committing to ongoing ethical reflection, continuous learning and practical client support, we uphold our professional standards and strengthen the relationships that form the foundation of meaningful career support.