Timeline: June 2024 - September 2024
Role: UI/UX Designer
Project Type: Internship
Skills: Interaction Design, Financial Planning, Prototyping, Design Systems
The goal of a financial planner is to create financial plans and understand when the company has deviated from its projections. Planners use a technique called what-if analysis to run potential scenarios, varying revenues and costs to see how they affect overall performance. From this analysis, they can determine the risk associated with these discrepancies.
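To make the technique concrete for readers who, like me at the start of this project, are new to financial planning, here is a minimal sketch of a what-if analysis. All figures, names, and the scenario structure are hypothetical and are not taken from the actual product:

```python
# Minimal sketch of a what-if analysis, for illustration only.
# All figures and the 5% / 3% adjustments below are hypothetical.

plan = {"revenue": 1_000_000, "costs": 750_000}    # what the planner budgeted
actual = {"revenue": 940_000, "costs": 780_000}    # what actually happened

def operating_income(p):
    return p["revenue"] - p["costs"]

# Variance: how far actual performance has deviated from the plan.
variance = operating_income(actual) - operating_income(plan)
print(f"Variance vs. plan: {variance:+,}")         # -90,000

# What-if scenario: vary revenue and costs and re-check the outcome.
scenario = {"revenue": actual["revenue"] * 1.05,   # assume a 5% revenue lift
            "costs": actual["costs"] * 0.97}       # assume a 3% cost reduction
print(f"Scenario operating income: {operating_income(scenario):,.0f}")
```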
1. As a financial planner, I need to leverage Gen AI to quickly analyze data and understand the causes driving a discrepancy between a plan and a prediction so that I know what I need to explore in order to mitigate the discrepancy.
2. As a financial planner, I need to leverage Gen AI to create different scenarios and understand their impact so that I can recommend the best course of action to address a discrepancy to a plan.
The user I am designing for is a financial analyst, specifically at an automobile manufacturing corporation. It is important to note that she is an expert user, which means that she knows how to pull information from a page and requires less overall guidance.
I was able to break down the steps a financial planner takes into 5 key stages:
Discover: The user becomes aware of discrepancies.
Learn: The user understands causation behind a discrepancy.
Explore: The user works with data to determine potential solutions.
Take action & Monitor: The user works with stakeholders to act according to an insight.
From here, I mapped out the decision tree of a financial analyst to understand the key action points. This allowed me to simplify the user's interactions so I could create the optimal model for a financial analyst to learn and explore data.
I looked into over 15 products and their AI assistants. I then summarized and categorized each AI experience so I could properly compare them to our use case and to each other.
Omnipresent: the product’s AI assistant is meant to be used throughout the application.
Contextual: the AI is only revealed within a particular context of the application.
General audience: all users of the application would benefit from the AI assistant.
Specific audience: the AI is directed towards a subset of users.
💡 Insight #1
Omnipresent AI assistants generally take the form of chat bots because their uses are more general.
💡 Insight #2
AI assistants with specific use cases and specific audiences are in the form of input/output boxes.
💡 Insight #3
Prompt suggestions are displayed to help the user understand how the AI assistant is intended to be used.
💡 Insight #4
AI editing tools can directly insert generated content whereas more general AI tools cannot.
After my research, I was able to identify design principles that guided my design decisions as I began ideation:
The user guides the AI; the AI doesn't lead or guide the user.
We want to earn the trust of our professional users by allowing them to be in control of what Gen AI produces for them. We only provide suggestions when we have extremely high confidence in their relevance.
The Gen AI output is bound to the context.
Prompts and queries are limited to what is relevant to the current page. This is important because it means the AI doesn’t have to disambiguate the user's intent.
Provide data sources.
We provide backing data to the user for them to validate any analysis that Gen AI has provided so they can be sure that the response is factual. This gives them the option to verify insights themselves, and also increases trust since the AI is effectively citing its sources.
Our AI is built in, not bolted on.
I wanted to make sure the AI experience in my designs was differentiated and aided our user flow in a way that truly served our user’s goals. I was careful to embed AI into the application, as opposed to throwing a chat bot in a side panel and expecting the user to seek it out themselves.
My first design exploration integrated a chat bot into the user flow because this was an extremely relevant pattern in my research.
In this exploration, the user wants to understand why the variance is high in this chart, so she asks the AI assistant, which can be accessed globally. This opens a chat panel where she can ask about the data on the page.
This concept was quickly discarded for a few reasons. First, my research showed that chat bots are most useful when their use cases are general, whereas for specific use cases such as financial planning, input/output boxes are much more successful patterns. Second, the chat bot doesn’t feel connected to the page, which is a major drawback for our use case, where the user is constantly asking questions about the information on the page.
A question I had while designing was how to handle read-only pages versus pages we were modifying. The dashboard page is intended to be read-only, so I explored asking questions about the page's content through a page-level magic box, which then opened an insight panel for the AI-generated response. From there, the user could expand the data in a dynamic tab and ask further questions.
The idea here was that the user would be able to see the full page on the left while interacting with the response in the right panel. However, we ultimately felt that this interaction didn’t fully justify the space the panel took.
Within our explorations of read-only pages versus pages intended to be modified, another key distinction was page-level magic boxes versus object-level magic boxes.
Here, the idea was that the user wanted to run a what-if analysis on the data set, an action that modifies the page, so they would be able to click into the data visualization and add scenarios right there, in situ.
However, we realized that specifically for pages dedicated to running what-if predictions, the visualizations on the page would all be related to the same data. This made it counterintuitive for the user to click into specific data sets and ask object-level questions since the questions inherently were about all the data on the page.
This differs from a page-level magic box, which is demonstrated in this exploration.
Here, the user would interact with the data by dragging an empty canvas onto the page and prompting the search box at the top of the page; the AI's response would then fill the empty canvas.
From these explorations, we were able to narrow down that pages dedicated to expanding a single data set would benefit more from page-level magic boxes, since object-level questions on those pages are inherently about all the data on the page.
This internship required me to learn a lot about financial planning and its tools. Designing for an entirely new user group was challenging but extremely rewarding: I had to pivot design ideas many times, and I got closer to my final designs with every iteration. Another key takeaway was working within Oracle's Redwood Design System; it was challenging to fit my ideas within Redwood page templates. This summer allowed me to improve as a storyteller and as a designer overall.
Given more time, these are the next steps I would've taken:
My greatest takeaway from this experience was designing for financial planners, a user group completely unknown to me. This forced me to deeply understand a new user journey so I could design an AI experience that was built in, not bolted on.