AI policy
Our values-driven approach to credit union marketing and website design in the age of AI.
Guiding Philosophy
At PixelSpoke, we recognize that however we might feel about AI, it is here to stay. It has profound implications for both the digital marketing industry and the credit union movement, most of which have yet to be fully determined. As such, it is not something that we can ignore or treat casually.
Here are the three core tenets of our guiding philosophy:
Acknowledging Harm
We fully acknowledge that AI has vast potential to cause harm by deepening existing inequities, fomenting social and political divisions, enabling fraud and other malicious activities, spreading disinformation, and substantially adding to our carbon footprint.
Proceeding with Caution
We approach AI with a healthy sense of skepticism. We view it first and foremost as a junior collaborator—a tool to support brainstorming, drafting, analyzing, and completing repetitive tasks. We may use it to get unstuck or as a starting point when we just don’t know where to begin.
Taking Accountability
We are committed to human-led, AI-assisted design and development. Accountability rests with our team. However we employ AI, we will rely on our team members to be judicious about inputs, identify biases, verify sources, review copy and code, and exercise other human oversight as needed.
Our Values-Informed Approach
Our company values guide us in everything we do. Here's how they inform our approach to employing AI in our daily work.
We commit to using AI in a way that helps us improve our internal processes while protecting company and client data, minimizing bias, respecting intellectual property, upholding ethical design principles, and minimizing environmental harm.
If a problem arises, AI is never an acceptable scapegoat. PixelSpoke is ultimately responsible for the final work product, even when AI is used.
We also understand that we’re still in an exploratory phase, and when issues do surface, we will work collaboratively to find solutions.
All AI-generated output is reviewed, understood, and edited to meet our high-quality standards. We also verify that AI content is backed by credible sources.
If any problems are identified, we encourage team members to share their learnings so we can avoid similar missteps moving forward.
We strive to use AI in a way that enables us to focus more on the tasks that we enjoy.
Our goal is for team members to continue to feel a sense of ownership over their work, learn new things, and feel intellectually challenged.
As we continue to explore and experiment with AI, we encourage team members to be vigilant about identifying where AI may not actually be helping us.
When we default to AI, we sometimes spend more time refining prompts and editing the outputs than we would have spent simply doing the original task.
We are transparent with clients that we use AI to assist us with some tasks. We are also diligent about communicating our commitment to human oversight.
Uses of AI That We Don't Support
Using AI for any illegal, unethical, or malicious purpose.
Fully automating communications to clients, team members, partners, or other stakeholders without human review.
Generating content that is discriminatory, harassing, defamatory, inaccurate, or misleading.
Automating decision-making in areas that impact fairness, access, or equity without any human oversight.
Inputting any confidential client or company information into public AI models, including non-public personally identifiable information (NPPI), medical records, account information, or any other private member data.