In Bégin-Létourneau v. Syndicat des spécialistes et professionnels d'Hydro-Québec[1], an employee filed a complaint against her union under section 47.2 of the Labour Code, alleging that the union had failed to properly represent her.
She essentially accused the union of violating her right to speak at a union meeting.
In support of her allegations, she argued before the Tribunal administratif du travail (the "TAT") that the union's duty to represent her "is not limited to grievances or the collective agreement," but that "it also extends to the defense of internal democratic rights when a union acts in an abusive, arbitrary, or discriminatory manner toward a member," citing two case law references.
However, it appears that the cases cited by the complainant do not exist "and were probably invented by artificial intelligence," according to the TAT. In any event, the TAT notes, they were clearly not verified by the complainant before being submitted to the Tribunal.
The TAT therefore disregards these references and dismisses the complaint, concluding in particular that the alleged conduct falls within the union's internal affairs and is therefore not covered by section 47.2 of the Code.
What can we learn from this?
This case highlights, once again, the dangers of AI tools that can generate false or fictitious information. Similar cases are coming before the courts with increasing frequency, and we will continue to be confronted with this reality.
Most recently, in November 2025, the Court of Quebec commented on the risks of using AI for legal research in a hidden defects case in which the buyer had submitted non-existent case law references[2]. The Court wrote:
“39. However, these four decisions are hallucinations generated by artificial intelligence: they simply do not exist. The neutral references mentioned (2012 QCCQ 7854, 2009 QCCQ 3249, and 2018 QCCQ 1649) refer to other judgments unrelated to the sale of infertile animals.
40. The purpose of this comment is not to criticize the Buyer, who informed the Court that the list came from ChatGPT. Rather, the Court takes this opportunity to caution against the use of artificial intelligence for legal research purposes. In addition to providing inaccurate information, these tools may, as in this case, lead a litigant to believe that similar cases have been decided in their favor, which is misleading.
41. Ultimately, while artificial intelligence can be a useful tool, it cannot replace the rigor and reliability of official sources. Justice is based on evidence and applicable law, not on answers generated by an algorithm. A conversational artificial intelligence tool does not create case law and cannot, under any circumstances, serve as the basis for a judicial decision.”
Food for thought!
[1] Bégin-Létourneau c. Syndicat des spécialistes et professionnels d'Hydro-Québec, 2025 QCTAT 5208
[2] Lessard c. Longueépée, 2025 QCCQ 8285