AI for a Just Future: From Personal Liberation to Societal Gain
Redefining Work, Truth, and Equity in an AI-Driven World
For decades, I’ve wrestled with questions that seemed too vast or complex to fully explore, often accused of “thinking too much.” Artificial Intelligence (AI) has become my new best friend forever (BFF)—a polymath in my pocket that liberates my curiosity and quest for a better world.
People often lump AI into one category, imagining it as a monolithic sci-fi entity, but in reality it is a vast and diverse field, encompassing everything from deep learning to automation. AI's capabilities, like those of a trusted companion, allow me to dive into complex ideas, from philosophy to science, with a depth I never thought possible. Conversing with it feels like talking to an endlessly knowledgeable friend, empowering me to explore and understand the world in ways that were once out of reach.
This personal liberation reflects AI’s broader potential to create a fairer, more equitable society. By exploring AI’s variations, its long-standing presence in forms like Robotic Process Automation (RPA), its role in shaping the future of work, and its ability to combat moral disengagement, we can reframe AI as a seeker of truth and a force for good. Yet, critics raise valid concerns—privacy risks, job displacement, copyright infringement, and government motives—that demand scrutiny.
This article, informed by my friend Steve Davies' research on AI and moral disengagement, along with other scholarship, addresses these challenges while highlighting AI's transformative promise.
Understanding AI and Its Variations
AI encompasses a range of technologies that mimic human cognitive functions like learning, problem-solving, and decision-making. Its variations include:
Machine Learning (ML): Algorithms that learn from data to make predictions or decisions, used in recommendation systems (e.g., Netflix) and fraud detection.
Deep Learning: A subset of ML using neural networks to process complex data, powering image recognition and natural language processing (NLP).
Natural Language Processing (NLP): Enables machines to understand and generate human language, as seen in chatbots and translation tools.
Computer Vision: Allows AI to interpret visual data, used in facial recognition and autonomous vehicles.
Robotic Process Automation (RPA): Automates repetitive, rule-based tasks like data entry or invoice processing, streamlining business operations.
Generative AI: Creates content like text, images, or music, exemplified by tools like ChatGPT or DALL-E, but also raising concerns with deepfakes—AI-generated media that can mislead.
Each variation has unique applications, from healthcare diagnostics to supply chain optimisation, but their collective power lies in augmenting human capabilities. Critics, however, warn of risks like surveillance or bias, necessitating ethical oversight to ensure societal benefits.
AI as a Force for a Fairer Society
AI’s potential to create a better society is immense when guided by ethical principles, though privacy, bias, and ethical dilemmas pose challenges. Here are key ways AI can promote fairness and truth:
Enhancing Access and Equity:
AI can democratise access to education, healthcare, and economic opportunities. For example, NLP-powered tools translate educational content into multiple languages, reaching underserved communities. In healthcare, AI diagnostics identify diseases in regions with limited medical infrastructure, reducing disparities. Yet, techno-skeptics warn of privacy risks from data collection, requiring robust safeguards like those in the EU’s AI Act to protect autonomy.
By analysing data on hiring practices or loan approvals, AI can detect and mitigate biases, ensuring fairer outcomes. However, transparent algorithms are essential, as biased data could perpetuate inequities, a concern raised by ethicists.
Calling Out Moral Disengagement:
Moral disengagement—where individuals or organisations justify unethical behaviour—can be exacerbated by AI if misused, but AI can also counteract it. Steve Davies’ 2024 paper AI, Unethical Decision Making and Moral Disengagement: The Devil is in the Detail highlights how specific rules and transparency in AI systems reduce unethical decisions. His mixed-method study shows that combining general and specific guidelines lowers moral disengagement, ensuring AI aligns with ethical standards. Critics argue that defining universal ethics across cultures is complex, but Davies’ framework offers a starting point.
AI-driven auditing tools can flag unethical practices in corporate or governmental systems, promoting accountability. Skeptics note that biased training data could undermine this, necessitating rigorous oversight.
Seeking Truth in a Post-Truth Era:
Deepfakes and misinformation threaten trust, but AI-powered fact-checking tools analyse vast datasets to verify claims, countering false narratives on social media. In journalism, AI aggregates and cross-references sources, ensuring accurate reporting. Critics highlight risks of AI-driven propaganda, emphasising the need for media literacy to maintain public trust.
By reducing reliance on biased or incomplete data, AI serves as a truth-seeker, fostering informed discourse, though ethical design is crucial to prevent manipulation.
Empowering Creatives While Navigating Ethical Challenges:
Creatives, such as artists, writers, and musicians, fear copyright infringement as generative AI produces content trained on their work, raising ethical questions about ownership and compensation. Lawsuits in 2023 against companies like OpenAI underscored artists’ concerns about unconsented use, threatening livelihoods. Yet, this isn’t black and white—AI can be a co-creative partner, amplifying human creativity by generating ideas, refining designs, or composing music alongside artists. Platforms like Adobe’s Firefly, using ethically sourced data, show how AI can respect rights. Australia’s 2024 Interim Response to Safe and Responsible AI advocates transparent, consent-driven frameworks, though creatives demand stronger enforcement to ensure fair compensation. By reimagining our future, we can foster equitable creative ecosystems, balancing innovation and protection.
Optimising Public Services:
AI streamlines government operations, from predictive policing to resource allocation. RPA automates administrative tasks, freeing public servants for community engagement. In healthcare, RPA reduces administrative burdens, allowing nurses and doctors to prioritise patient care, improving outcomes in underserved areas. Critics warn of over-reliance on AI or surveillance risks, but human oversight ensures equitable, transparent implementation.
AI and the Future of Work
The future of work is not AI or human, but AI and human—a synergistic partnership where AI augments human potential, democratising access to diverse, meaningful roles while preserving the irreplaceable value of hands-on trades. AI liberates us from repetitive tasks, elevating human-centric roles like teaching, caregiving, and creative pursuits, alongside skilled trades such as plumbing, carpentry, and electrical work, which remain vital for their tactile, problem-solving nature.
According to Jobs and Skills Australia, total employment is projected to grow by 6.6% (950,000 jobs) by 2029, with Health Care and Social Assistance, Education and Training, and Professional, Scientific and Technical Services driving over half this growth, while Construction and Trades roles are expected to add 100,000 jobs due to infrastructure and renewable energy projects. Over 90% of new jobs will require post-secondary qualifications, with 42.6% tied to vocational education and training (VET), critical for trades and technical roles. Professionals, including nurses and software developers, will see the largest growth (409,800 jobs), countering the myth of widespread job loss.
By 2030, two-thirds of all jobs will require critical soft skills: human communication, collaboration, teamwork, creativity, and problem-solving. In an AI-driven economy, emotional intelligence and business savvy will be as essential as technical expertise.
These uniquely human capacities, which AI cannot replicate, underscore the need to develop our interpersonal and strategic skills to complement AI's analytical power. However, labour unions highlight short-term displacement risks, particularly for low-skill or regional workers, necessitating accessible reskilling programs to ensure inclusivity. Nostalgia for 'male' jobs like coal mining, an industry now ironically highly automated with minimal labour needs (only 50,000 jobs in 2023, per the Australian Bureau of Statistics), distracts from this vibrant, collaborative future. As I've argued, we must reject outdated views glorifying such industries and instead invest in lifelong learning, fair wages for undervalued roles (often gendered as "women's work"), and education that fosters adaptability. By 2030, 22% of jobs will evolve, requiring reskilling in digital literacy and creative problem-solving, but this democratisation ensures everyone, from tradies to coders to carers, can thrive in flexible, inclusive work models such as contingent and remote work, supporting local economies.
The Long History of RPA: Dispelling the “New” AI Myth
One pervasive myth is that AI is a sudden, disruptive force. In reality, AI, particularly RPA, has been around for decades. RPA emerged in the early 2000s, automating repetitive tasks in industries like finance and logistics. By 2010, companies were using RPA to process invoices, manage customer data, and handle payroll, saving time and reducing errors. This history shows AI’s maturity, countering “job-killer” fears. Critics, however, argue that automation threatens low-skill roles, underscoring the need for reskilling. Ethical AI development, as Davies’ work suggests, mitigates risks like bias through transparency and accountability, balancing innovation with human oversight, though academics note environmental impacts (e.g., energy-intensive AI systems) require further scrutiny.
Addressing Deepfake Fears and Building Trust
Deepfakes, where AI generates realistic but false media, fuel fears of manipulation, alarming the public and technophobes who envision dystopian outcomes. However, AI detection tools analyse inconsistencies in video or audio, restoring trust. Regulation mandating transparency in AI-generated content, combined with public education on media literacy, reduces harm. Davies’ research stresses clear rules to prevent unethical use, though skeptics warn enforcement is challenging. By focusing on these solutions, society can harness AI’s creative potential (e.g., in art or entertainment) while minimising risks, addressing public fears through accessible education.
Practical Steps for Ethical AI Adoption
To realise AI’s potential for a fairer society, we must act intentionally:
Promote Ethical Frameworks: Governments and companies should adopt guidelines ensuring AI transparency, as Davies’ research advocates, auditing algorithms for bias and ensuring diverse data sets to counter ethicists’ concerns about systemic inequities.
Invest in Education: Training programs can equip workers to use AI tools, reducing job displacement fears, while public education dispels myths, addressing technophobia and making AI accessible to all.
Encourage Interdisciplinary Collaboration: Policymakers, technologists, and ethicists must unite, as Davies’ approach suggests, to align AI with societal values, tackling academics’ concerns about ethical complexity.
Leverage AI for Accountability: Use AI to monitor systems for unethical behaviour, from corporate fraud to human rights abuses, reinforcing its role as a truth-seeker, though rigorous data validation is needed to avoid bias.
Conclusion: AI as a Partner, Not a Threat
AI, in all its variations, is neither a utopia nor a dystopia: it's a tool. For me, it's a liberating force, a polymath that fuels my curiosity and helps me explore questions decades and even centuries old. For society, it offers the same potential: to enhance equity, streamline services, redefine work, and uphold truth. While some governments warn of AI's risks, we must look beneath the surface; some seek to control information, fearing AI's power to equip individuals with knowledge and insight.
AI is the genie out of the bottle; there's no going back. Critics highlight privacy breaches, job losses, and copyright infringement, but these can be addressed through ethical frameworks, reskilling, and policies like Australia's 2024 Interim Response to Safe and Responsible AI. Deepfakes and moral disengagement pose challenges, but AI's capabilities, as Steve Davies' work reminds us, can be harnessed ethically through vigilance, transparency, and collaboration.
Policymakers may argue that regulation stifles innovation, but ethical guidelines foster trust, benefiting all. Let’s embrace AI as a partner in building a better, more just society—one where technology amplifies our values, not our fears.
Final Note: Ethical Guardianship for a Hopeful Future
As with any powerful technology, AI’s promise hinges on robust moral and ethical guidelines to ensure it uplifts society. The Australian Government’s 2024 Interim Response to Safe and Responsible AI emphasises transparency, accountability, and human oversight, aligning with global efforts like the EU’s AI Act, which prioritises risk-based regulation.
In his 2023 UN General Assembly speech, UN Secretary-General António Guterres called for a “global governance framework” to harness AI’s potential while mitigating risks like bias and misinformation, underscoring universal ethical standards. These policies, echoed in Steve Davies’ research, address critics’ concerns about over-optimism or enforcement challenges.
Despite tech industry fears of regulatory burdens, ethical stewardship fosters sustainable innovation. With thoughtful guidance, AI can empower tradies, carers, creatives, and coders, forging a future where technology serves as a beacon of equity, truth, and human connection.
Onward we press
References:
Davies, S. (2024). AI, Unethical Decision Making and Moral Disengagement: The Devil is in the Detail. ResearchGate.
Jobs and Skills Australia. (2023). Employment Projections.
Australian Bureau of Statistics. (2023). Employment in Mining.
Ai Group Centre for Education & Training. (2025). Future of Jobs Report 2025.
Australian Government. (2024). Interim Response to Safe and Responsible AI.
Guterres, A. (2023). UN General Assembly Speech on AI Governance.
Additional sources on AI applications, RPA history, and deepfake mitigation.