The European Union will soon enact the world’s first comprehensive regulatory framework for artificial intelligence. Atlassian welcomes the EU AI Act (the Act). We believe thoughtful regulation can build trust and foster responsible development and deployment of emerging technologies like AI.
As we celebrate this important milestone, I want to share my reflections on Atlassian’s presence in Europe, our journey in developing a Responsible Technology program, and three promising elements in the Act that we believe are critical to its implementation.
Innovating for the future, hand-in-hand with Europe
Atlassian has a singular mission – to unleash the potential of every team – and we bring that mission to life every day across Europe. Roughly 120,000 of our 300,000+ customers are based in EMEA, primarily in Europe, and reflect the diversity of European industry, including companies like Audi, Cancer Research UK, Capgemini, Klarna, Air France-KLM, and Solarisbank.
But Europe isn’t just a market we sell into. In addition to powering collaboration at thousands of European organizations, Atlassians throughout Europe drive innovation in our products and services. The team that developed Jira Product Discovery, for example, is fully distributed but predominantly based in Europe. Plus, hundreds of Atlassians in customer-facing roles, spanning technical support, sales, and community events, are distributed across the continent.
Our principled approach to emerging technologies
Atlassian has taken significant steps in our AI journey over the past year. First, we released Atlassian Intelligence, which combines state-of-the-art models developed by third parties like OpenAI with powerful teamwork data captured inside the Atlassian platform. Through Atlassian Intelligence, we deliver time-saving capabilities like page summaries, service desk automation, and task prioritization. The experience is dynamic and highly contextual for each customer, while honoring our commitments to customer privacy and security.
We also published our Responsible Technology Principles, the framework we use internally to ensure we’re being thoughtful about our development and use of new technology. These principles were heavily informed by (and align with) a number of industry frameworks. But they’re also uniquely Atlassian, drawing upon our company values as well as our commitments to our customers, employees, and other stakeholders.
Finally, in true Atlassian style, we shared our Responsible Tech Review Template and No-BS Guide to Responsible Tech Reviews publicly to help teams around the world translate principles into practice. We’ve incorporated these resources into Atlassian’s development workflows over the past year, using them to guide the development of every Atlassian Intelligence feature before it is released to customers. Now they can serve as a guide for any team shaping how AI is incorporated into products, services, and operations.
We hope that by sharing what we’ve learned so far, we will encourage conversation and collaboration across stakeholder communities. Responsible technology governance is impossible alone, and we welcome the Act’s regulatory guardrails for the development and deployment of AI.
Three critical concepts for implementation of the Act
Atlassian’s perspective on implementation of the Act centers on three core elements that we believe are crucial to achieving its objectives.
Nuanced perspectives on risks stemming from AI
We applaud the Act’s recognition that governments have a role to play in assessing risks with implications for whole societies, and that not all risks are the same. For example, the Act highlights the systemic risks presented by some general-purpose AI models. At the same time, it identifies a fairly narrow set of prohibited and high-risk use cases, likely putting most AI systems in low-risk categories subject to a lighter regulatory touch.
Good-faith engagement and dialogue
AI is advancing at a dizzying pace, making it incredibly challenging for legislation to keep up. To address this, the Act lays out a broad set of regulatory tools that encourage proactive compliance by good-faith actors. These tools include codes of conduct, conformity assessment schemes, and other forms of governance designed to accomplish the Act’s goals.
Partnership around accountability
The Act kicks off a new era of accountability in the AI space. Internal initiatives like Atlassian’s are foundational to readiness for AI regulation, but they are just a start. It will also be important for us and other AI stakeholders to stay connected to implementation as it unfolds.
As such, we welcome the AI Pact, a new initiative led by the European Commission to promote compliance with the Act through voluntary engagement, and we look forward to contributing to its development. We believe that programs like the AI Pact represent opportunities for organizations to renew their commitments to responsible tech practices and prepare for new obligations and expectations.
Similar themes run through our own Responsible Tech program. We’ve developed a risk-based assessment process for our AI products and use cases, we’ve created tools to enable good-faith dialogue about the development and deployment of AI systems, and we’re preparing for a new era of accountability for AI.
Impossible alone, but possible together
Atlassian has leaned into voluntary measures governing our AI products and internal use of AI. We’ve shared that work publicly through our Responsible Tech initiative to give other companies a leg up in implementing the Act’s requirements. And now we’re keen to partner with our European customers, EU Member States and institutions, and the broader AI stakeholder community to support the Act. Together, we can build a future where AI is as ethical and equitable as it is exciting.