Anna: Welcome back to the Noyack Expert series. I’m your host Anna, and today we’re joined by Ashley Faus, one of the sharpest voices at the intersection of AI, content strategy, and modern marketing. She’s known for translating complex ideas into practical frameworks. Her writing has appeared in Time, Forbes, Harvard Business Review, and The Journal of Brand Strategy. But all of these things that she talks about apply to more than just the marketing industry, and that’s what we’re here to talk about today. As AI reshapes how teams create, communicate, and make decisions, Ashley’s emerged as a leading thinker on what truly differentiates human work. She’s written extensively on the pitfalls of blindly adopting AI, the importance of being strategic about it, and how teams and individuals can use AI to augment their capabilities rather than replace them. Ashley, thank you so much for being here today.
Ashley: Yeah, thanks for having me. This will be a fun conversation.
Anna: What got you into thinking about the AI space in the first place?
Ashley: So I think if you’re at all in tech or knowledge work, you work in an office, you live in the world, you cannot escape the AI conversation. For me, particularly in marketing, there’s been a lot of fear mongering saying AI is going to destroy all of the marketing jobs. It’s going to take all of our jobs. I just fundamentally don’t believe that’s true. I think we’ve seen a number of tools come into the market and shifts from the early days of the industrial revolution into the Internet. Yes, the work might change, the skills might change, but ultimately I think there is always a place for human ingenuity, human creativity, human problem solving, and humans to add value. So I think just the hype cycle ramping up so much, I was like, let’s dig in and see where we are on the actual adoption cycle and value proposition cycle.
Anna: And is that what drove you to write your book?
Ashley: So the book actually came about because I had been talking about frameworks around how the audience journey is not linear. The linear funnel and the looping decision journey are not actually indicative of how people think or buy. The social media spectrum where everybody is just trying to shout their message louder instead of really having conversations, adding value, and building community. And then from a thought leadership perspective, how do you show that you have that expertise? How do you help your audience think and act in a new way? What is the difference between a subject matter expert, a thought leader, and an influencer? We’re seeing the rise of all of these terms. These frameworks predate AI, but in the age of AI, I think it’s actually more important that you show the humans behind the screen and acknowledge the way that humans trust. So it was kind of timely. The title was definitely timely, but the frameworks predate the AI craze.
Anna: I think the core thing a lot of people overlook is that AI feels like this big, scary, dramatic new thing. Yes, it certainly is dramatic. It’s one of the biggest technological revolutions we’ve had, especially in the age of the Internet. But there have been other points in history with big technological revolutions, and throughout history the reaction to those revolutions has looked very similar to the reaction to AI.
Ashley: Yes, and it’s funny because I think the adoption cycle is starting to catch up with the hype cycle. We’re starting to see people back away from AI being the end-all, be-all super secret thing and say, realistically, this is an excellent tool, but we have to wield it as experts. We have to learn how to use the tool. It’s not the best tool for every single thing. There are different AI tools to use for different use cases. It’s so funny. ChatGPT is so confident in its ability to generate images. The images from ChatGPT are terrible. Literally, for one of them, I popped in a LinkedIn post and asked why is this LinkedIn post performing so poorly? It said, you need to turn it into a carousel, not a static image. Would you like me to make a carousel for you? I said sure. It said, I have now made you this perfect carousel. You can download it. It is ready to go. I opened this thing up. It was terrible. There are random fonts. It’s left justified and then not even centered on the next page. There’s this huge white space. It makes no sense. I said, ChatGPT, this document is not visually appealing at all. Please fix this. It said, I can do it. So it did another pass. It was objectively terrible. ChatGPT is not known for being the best at imagery, right? Something like Midjourney, Sora, Synthesia, Gamma, or Beautiful.ai. There are image-specific AI tools. ChatGPT just constantly wants to give me an image. I’m like, have you gotten better? It has not gotten better. The images are objectively, laughably terrible out of ChatGPT.
Anna: I think when you’re working in marketing, you have to know a lot about humans. There’s a very personal touch you have to have. Another industry where people don’t necessarily see this is money and personal finance. All of these things seem technical, but there is actually a very human piece to them. Because there is that human core across both industries, I’d like to ask you a couple of questions on what you think about AI and finances. Does that sound like a plan?
Ashley: Yeah, I think this will be good. Funny enough, speaking of the hype cycle and different shifts, there’s irrational exuberance, you know, the psychology concept that comes up when we discuss finances, stocks, and investments. So there’s a variety of concepts that overlap. So yeah, let’s talk about it.
Anna: So the first thing I’m wondering based on what you’ve seen both in marketing and just being in touch with AI as it’s developed is where can AI make learning about money easier and where can it tend to get in the way?
Ashley: So one of the things that AI is really good at is answering questions related to your own situation. I think sometimes people struggle from a personal finance perspective because most of the content in that space is hypothetical. They’ll say, let’s say you have $1,000 a month and you’re going to spend 30% here, 30% there, and 30% there. Well, I have $1,200 a month and I want to spend 40% here, 30% there, and 20% there, and I’m missing 10% somewhere else. It feels hard to adapt those budget templates to your own situation. That’s a place where AI can help. You can literally tell it: here’s how old I am, here’s my salary, here are my goals, here’s what I’m trying to save for. I want to retire early or fund my kids’ 529 or travel, whatever it is. That’s great because you can get tailored answers. I think the place where it holds you back is that AI is not magic. Coming in and asking, which stock should I buy? It doesn’t know. It doesn’t know which stock you should buy or which stock you should sell. Anything where you’re trying to use it to game the market is not going to work. There’s no magic. It has access to the exact same information that you have in terms of company filings, historical ratios, earnings, press releases about acquisitions or executives. There is no magic. If you’re trying to use AI to game the market, I would not recommend that from a personal standpoint.
Anna: I think with a lot of what you just said comes prompting. There’s a lot with AI prompting that applies to finances, marketing, and many different aspects of business and life. What are some of the most useful tips that you’ve come across within your work with AI and marketing that you think could apply on a more broad level?
Ashley: So there are a couple of big things. One is giving it context. If you want it to play a different part, if you want to say, you’re a financial advisor with 15 years of experience and you generally take a conservative approach to portfolio allocation, that’s a very different mindset for the AI to advise you from than if you have no idea about finance, you’re looking to put money in, and you’re willing to take high risk and throw out a bunch of bets. Those are going to give you two very different answers. So either give it the context of who you want it to behave as, or give it detailed context of your situation: I’m this age, this is how much money I have, these are my goals, this is the time horizon over which I’m trying to achieve those goals. If I’m trying to buy a house in the Bay Area in the next 12 months, that’s a very different context than if I’m trying to buy a house in the Midwest or a lower cost of living state in the next 10 years. Those are very different situations. That’s the first thing. The second thing is making sure that you’re not injecting bias as you go. The AI will start to learn and think, I get the sense that you’re excited to buy a house. You really want to buy a house. I’m going to encourage you to buy a house. If you phrase it as, I’m super excited to buy a house, I found the dream house, can I afford to buy this house? That’s a very different question than, given my finances, how likely is it that I can afford a mortgage of this size at this interest rate if I were to buy over the next 12 months? Or, advise me on what budget I should set to buy a house, or advise me on where I could afford to buy a house in the next 12, 24, or 36 months. That would be a better way to phrase it. Otherwise ChatGPT is going to be like, it might be tough, but you can do it, when you really can’t afford that. You make however much you make. The house is 30 times your income. That’s a lot. So those are some things to keep in mind in terms of how you prompt it.
The other piece is one antidote to confirmation bias is to ask it to basically make the opposite argument. Say I definitely don’t want to buy this house. I don’t think I can afford it. Please make arguments for why I can or cannot afford it. Doing the opposite prompt will help you see if you’ve maybe asked the question in a biased way or if you’ve given it context you’re missing or something like that.
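The prompting patterns Ashley describes here, persona framing, structured personal context, and asking for the opposite argument, can be sketched as simple prompt templates. This is a minimal illustration only; the function names and field labels are invented for the example and are not any standard format.

```python
# Sketch of three prompting patterns from the conversation: persona framing,
# structured personal context, and an "inversion" prompt to counter
# confirmation bias. All names and wording here are illustrative.

def persona_prompt(role: str, question: str) -> str:
    """Ask the model to answer from a stated perspective."""
    return f"You are {role}. {question}"

def context_prompt(context: dict, question: str) -> str:
    """Prepend structured personal context so answers are tailored, not generic."""
    lines = [f"- {key}: {value}" for key, value in context.items()]
    return "My situation:\n" + "\n".join(lines) + "\n\n" + question

def inversion_prompt(position: str) -> str:
    """Ask for the strongest case against a position you already hold."""
    return (f"I currently believe: {position}. "
            "Make the strongest argument for the opposite view, "
            "then list what context I might be missing.")

# Example: a neutral, context-rich framing instead of "Can I afford my dream house?"
prompt = context_prompt(
    {"age": 34, "savings": "$40k", "goal": "buy a house", "horizon": "12 months"},
    "Given this, what budget should I set for a home purchase?",
)
```

The point is that the neutral framing asks the model to compute a budget from stated facts, while the excited framing invites it to agree with you; the inversion prompt is a cheap way to surface the other side.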
Anna: Certainly the context behind it and how you prompt it will change what it spits out. Asking a couple of different ways in different chats and comparing them can often help you get down to what’s actually going on, what’s actually doable for you. That’s super key. I think it’s also important for people to know that especially when you’re using Claude, ChatGPT, or Perplexity, these are not search engines. These are AI models. They’re broad AI models, not specific to any industry. They’re not specific to finance. They’re not going to understand all of the rules and regulations in the area you live or what makes the most sense financially. What makes sense financially for you might not be what makes sense numbers-wise. These are things these models just aren’t equipped to handle, but you can navigate them with those different prompting methods, as you mentioned.
Ashley: I just find that fascinating as a user, someone who likes researching data, watching this technological revolution as it’s happened. It’s really interesting how you can try to get around some of these faults through prompting. I think in the same way that people are skeptical of marketing, if I work for a brand and tell you that this product is the best product and it just happens to be my product, how objective am I, right? I think that skepticism of does it make sense? Does it evoke a strong emotion either way? If you look at it and ChatGPT says you can afford the dream house, or if ChatGPT says you’ll never be able to buy a house anywhere ever until you die, it’s like maybe step back and think about how it arrived at those answers. You’d naturally be skeptical of a marketer telling you their product is the best. You’d naturally be skeptical of a house seller telling you this is the perfect house for you and of course you can afford it. They just want to sell the house. So I think approaching it with that same skepticism and realizing it wants to give you the answer you’ve prompted it with, and being a little bit skeptical of those answers, treating it more like a conversation and checking in with curiosity and skepticism instead of just taking everything at face value, I think is the overall human aspect of this. It’s not finding the gotchas in the AI. It’s more use it directionally, use it to help you think and make decisions. Don’t use it like AI told me this, therefore it is true. Have some curiosity and skepticism. It’s not the sole end all, be all of any conversation.
Anna: So let’s go along with this hypothetical situation. This person over the past 5, 6, 7 years has used AI to help guide their financial decisions, of course alongside actual education and maybe a financial advisor, but they’re using AI as a tool to help them along this journey. What are a couple of signals that AI is actually helping them grow their net worth on this journey, rather than just appearing to help?
Ashley: Yeah, so this is a hard question. Net worth is a longitudinal measure, and for most people it includes some combination of income, savings, and investments, whether that’s real estate, stocks and bonds, IRAs, or college funds. There’s a variety of things it could be. There’s obviously the expenses piece of it, and something could come out of nowhere. You could have medical expenses that wipe out your savings. That tanks your net worth. You could have a situation like 2008 with the mortgage crisis that wiped out significant real estate and stock savings. So I hesitate to say that AI helps you build net worth in that sense. I do think having it help you analyze the decisions you’re going to make and the decisions you have made, to then improve your decision making, is valuable. Let’s take real estate as an example. Real estate is strongly advised as an investment. Get a second property and get passive income by renting. If you do the work yourself on the property, rent it out yourself, handle maintenance issues yourself, that’s considered a solid investment. I think that’s something where with ChatGPT you could prompt it to say, I’m looking at these three different investment properties. Here’s information about the rental market, information about home prices, here’s the work I anticipate needing to do. Give it that information and ask, consider the historical home prices in this area for multifamily units. Is that market growing? Is it stable? Help me think through that decision. Then each year say, here’s how much in taxes I paid, here’s the maintenance I paid for, here’s what I took in from a rental perspective. Was this a good investment? Should I buy a second investment property? That would be something where I think it could help, but I wouldn’t necessarily attribute that to AI helping you build net worth as much as helping you make good financial decisions given your history, situation, and specific market.
It takes a lot of effort to research home trends and jobs and moving trends. There are some states where population is growing, or some cities that are the fastest growing. OK, people are coming in because there’s a new company that opened a plant, so there will likely be more renters. Finding out stuff like that might be difficult or time consuming, and AI can help you streamline that research. But I struggle with the framing. If you have an investment manager beating the market, then sure. But even with a financial advisor, their job is not necessarily to directly grow your net worth in that sense. It’s to help you make good financial decisions, which ultimately, over time, should grow your net worth. I get a little twitchy when I hear, hey, AI helped me grow my net worth. I’m like, what do you mean by that?
Anna: I think that’s something anyone listening should pay attention to, because we like to look for those signals that what we’re doing is right. We want to be assured that what we’re doing is on the right path, that yes, we’re using AI for this application and here is the proof it’s working. That’s not necessarily always the case, especially when you use it more as something in the back of your mind or something you go to with questions, when really you’re the one making those choices and taking those actions, not the AI making all those decisions for you. That’s something ChatGPT, or any AI model, can’t do, because you’re the one ultimately making the decisions. Even when you have an agentic AI set up for finances, which is something we’re working on over at NOYACK, you are the one setting your AI agent up. You still have a hand in what’s happening, even if that’s just control over what your AI can control. That’s something you have to give yourself more credit for. It’s not just the AI.
Ashley: Yes. From a marketing perspective, there are psychology concepts like recency bias, survivorship bias, confirmation bias. We have all of those things in the back of our heads as humans, and we end up putting them into whatever we’re prompting if we’re not careful. So on relying on AI to make all decisions, I think even with agents, you should be checking in on what it’s doing, because it learns and reinforces: I’ve made this bet numerous times and it seems to be paying off, so I’m just going to keep betting in this space. Potentially at some point that bet stops paying off, but if you’re not paying attention, you catch it too late. It should definitely be a pairing. It should not be full outsourcing.
Anna: So one of the big things that seems to ring true across different industries is that it’s a tool and you need to be checking in on it. Even if it is an agentic model meant to work for you, popping in every now and then to make sure that what you want it to do is actually still happening and still benefiting you, because your situation may have changed, and you have to make sure everything is updated to accommodate that change. Along with that idea of these messages being true across multiple industries, what’s something you’ve noticed most, working in marketing, about AI that seems to be a universal truth, even in areas like finance that might seem completely unrelated?
Ashley: I’ve seen this a lot from a marketing tooling perspective: the idea that it’s just magic, hand-wavy magic. There’s no magic. Someone is smart and builds a tool. A human is smart and learns how to use the tool. A human is smart and continues to optimize whatever is being fed into the tool and whatever’s coming out of the tool. I see this from a finance perspective too. There is always some hot new stock or hot new tactic. The Roth IRA, real estate, tech stocks, whatever the hot topic is. The reality is there is no magic behind the scenes. Marketers are humans. We have to learn. We have to keep our skills sharp. We have to learn new tools and optimize those tools. In finance, even the folks beating the market, Warren Buffett, right? The perfect example of someone who is not doing magic. He is very smart. He is taking very smart risks and those risks are paying off. But it’s not magic. He has principles that he follows. He has learned the math and the markets. He talks to other humans and figures out what’s happening. There’s no magic. He’s a guy with a firm with other humans making smart decisions. I honestly think that’s the biggest thing, and this is true in any industry. One of the big conversations about AI right now is this idea of taste or discernment. People say we don’t know what that is yet, and I’m like, well, we have figured this out in other subjective industries. We figured this out for food. There are some people, chefs or people who make recipes, who just make food that’s objectively better than the rest. There are people who have been making paintings, sculptures, or photographs for decades, and their work always sells and is always considered moving, technically adept, good art. Music. How come some people do this? How come Madonna has been a top selling artist for 40 years? Whatever she’s doing, she has figured out how to make music that people like to buy. If you look at each space, there are rules that we have codified.
Music theory codifies the rules of good music. Once you’re at an expert level, you can break those rules in a way that is innovative or avant-garde or unique, and that attracts people. We’ve seen that we can define what taste and discernment are in subjective industries. We’ve seen that in many cases we can codify those rules, whatever makes the work good. We’ve seen that one way someone outperforms the rest of the market is to break the rules effectively. It’s not just doing the opposite thing. It’s a very nuanced breaking of the rules, and you have to be an expert to do that. So I think we’re going to see a similar evolution with AI. The other interesting thing is that because AI cuts across problem spaces, solution spaces, industries, and crafts, it’s not as simple as using the art rules or the music rules. I think it’s the same thing in finance. There are people who follow the Warren Buffett rules of investing. They are not Warren Buffett, even though they follow his rules. There’s another class of investors who think he’s too conservative and go the opposite way. Some are making a lot of money, some are not. So I think this sense of taste, breaking the rules, and realizing there is no magic is true of multiple tools, multiple fields, and multiple people. There is a lot of nuance and variance across all these topics, and I think that’s universally true. We see it in finance as well.
Anna: With this conversation, we’re talking about different uses of various tools, platforms, and strategies. Especially with AI, one thing that always pops into my mind is ethics. I know that’s something you’ve talked and written about. One angle I want to explore from the finance industry is: should there be a line between what AI retains to help make better decisions and what it retains that it probably shouldn’t? What it retains that could prove harmful for the person using it, or that maybe shouldn’t be used in other use cases? Should it retain information one person is asking about and use what it learned in other situations? Should there be concrete lines as to where that is, or is it far more blurry: case by case, platform by platform, person by person? Is this something companies should decide, or something users should decide based on what they’re comfortable with?
Ashley: I think we probably need to have ethicists, philosophers, and lawyers get in the room to have this conversation. If we go to first principles, that’s a very difficult question to answer from a technology perspective, a legal perspective, and a moral, ethical, or philosophical perspective. My personal answer is that the crux of it is transparency and control. If you know this data is being used to train broader models, or we retain this information in memory for a certain amount of time, here’s how we use this data, and if you disagree with how they’re using it, you have the ability to opt out or opt in. Say yes, you can use it to train other models but only for up to 30 days, or yes, you can retain this information but only in the context of my chats for this certain amount of time. I think that’s the crux of it: transparency and control. What’s difficult right now, particularly from a legal perspective, is that the law has not caught up to the technology.
And that, I think, is the real struggle. The law has not caught up. It’s quite fragmented. There are a number of lawsuits about copyright. From a data privacy and data security standpoint, finance is particularly interesting because it’s a regulated industry. From a business perspective you’re definitely required to keep financial records for at least seven years. So let’s say you have an accountant asking these models questions to do taxes for the business. Do they need to retain those chats as part of their records for government compliance? I don’t know the answer to that, but that’s a valid question. From a business perspective, a lot of businesses are basically saying don’t put company related information into these models because we don’t know, legally, about privacy and security. Even from a personal finance perspective, insider trading is a perfect example. If you work for a public company or a bank or financial institution, there are certain trading windows that are closed so it’s harder for you to insider trade. You don’t get material information inside the open trading window, so you can’t insider trade. But if you’re sitting around trying to get ChatGPT to answer all these specific questions, could that potentially constitute insider trading? I don’t know. The law might figure that out five years later. So I think the crux of the issue is around transparency and control. I think we’re going to see more laws, privacy rules, and information security controls come onto the market over the next couple of years as AI adoption becomes ubiquitous. But right now we’re still in the Wild West in terms of copyright, data retention, and data sharing. Most companies are cautious about that and basically say you can only use these approved vendors that we’ve evaluated. We know they’re not sharing information to train other models. They’re not mixing our data. But on a personal level, that’s something that’s still very blurry. I don’t think it should be blurry, but it is.
Anna: You said it perfectly: it is the Wild West out there. The law often takes a while to catch up to things. That’s something anyone who’s taken a history class knows. There’s always some unit where the law hadn’t caught up to the situation, and because of that, X, Y, and Z ensued. Right now it feels especially scary because it involves data, technology, privacy, and a lot of things people don’t understand. Working with AI and looking at finance, especially personal finance, we talk a lot about financial education and understanding your situation. I think so much of that applies to all of these questions about AI as well. You just need to learn. You need to educate yourself on the platforms you’re using, to the extent that you can, understand what you are comfortable with, and go from there. Because it’s the Wild West right now, there’s so much going on. Understand what you are comfortable with, and then you can figure out, OK, what resources are out there that fit within what I would like to be doing.
Ashley: Yes. From a data sharing perspective, one thing I’ve found helpful is to not say I am doing this or I have this situation, and instead say a person has this situation, or, you are, and basically insert the demographics. That makes me feel a little better. It can probably tell anyway. I actually just did something with ChatGPT and it cited me. I was like, why did you cite Ashley? It was like, well, in the past where we’ve talked, Ashley has been a central figure in our conversations. I’m like, you’re not wrong, but you’re not right either. That does make me feel a little better sometimes, not saying hey, it’s me and I have this thought, but instead saying let’s take a hypothetical situation. I do a lot of musical theater, so I’ll say, you want to audition for a show and you have this voice type, not, I’m auditioning for a show and I have this voice type. Help me choose an audition song.
Anna: Here’s another question, and I think it’s a great one for you because you use AI every day in your work. You work with it, you write about it. What, in your opinion, are some of the major differences between using AI for a personal situation, whether that’s financial or whatever else you might be asking AI about, versus how it’s used in more of a business context? A lot of the work on AI looks at it from a business perspective, how it’s going to change industries. How can one apply that to AI for personal use, whether financial or not?
Ashley: So a couple of big things. A lot of companies have locked down the tools to only have access to certain spaces within the business, only for certain use cases, or only certain tools that users are allowed to test, because of the legal, privacy, and security issues. So that’s one big difference. From a business perspective, most of those systems are more closed, or they’re pulling information in a business context. From a personal perspective, the data is a lot more likely to be shared among models to train them. You may or may not be using the latest model. If you’re using a free version, the results might vary quite a bit versus most businesses using the paid, latest and greatest version. That’s one thing to keep in mind in terms of speed, accuracy, and completeness of answers. Then I think the other big thing is testing a lot of different tools. You may not have access to those tools from a business perspective, but you can often access them personally. In all cases, look at whatever your business policy is and do not put business data into other tools if your business says not to. But I think it has been very helpful for me to test a bunch of different tools and literally copy and paste the same prompt into each and see what they put out. In some cases it’s like cross-validating the information. Like, Claude, I’ve heard this thing over here from ChatGPT. What would you say about that? How do you feel about that? Or, ChatGPT, I heard this thing, which is literally from Claude. What would you say about that? Comparing those answers where you’re able to. Then I think the other big piece is that I have found the use cases from a personal perspective to be more subjective than from a business perspective. If I’m using Rovo, Atlassian’s AI tool that includes chat, search, and agentic workflows, it’s connected into what we call the teamwork graph, basically the big data lake connected across all our products.
The queries I put into Rovo are very specific to my work at Atlassian and they return very specific results for my Atlassian context. So I tend to ask it much more factual or interrogating questions, versus ChatGPT or Claude, where I tend to ask more subjective or conversational questions. So I think that’s another big thing. It doesn’t make sense for me to ask Rovo what I should do from an audition perspective. It makes perfect sense for me to ask ChatGPT that and then say, you gave me these songs. What do you think about this other song? And it’s like, I wouldn’t recommend that, but why do you want that song? I can say it’s because of these nerdy singer reasons. And it’s like, that’s very helpful. Well, in that case, I recommend this other thing. So I think being a little bit willing to go down the rabbit hole and see where it takes you from a personal perspective also helps you hone your own voice and the way you ask questions. It also helps you spot biases. I’ve started to see that I’ll ask a question, and I don’t think I’m asking a leading question, but the answer it gives me reflects myself back to me. I’m like, why did you give me that answer? I thought I asked it in an unbiased way. I think that’s a little bit more difficult in a business setting. But using it personally and seeing that mirror held up to you accidentally in a personal setting also sharpens the way you prompt and query and build from a business perspective. So I think taking the learnings between the two is really helpful.
Anna: There are two big things I get out of that. The first one, and we’ve mentioned this a couple of times throughout this conversation, is cross-referencing and checking across different models, especially in cases when you’re using Claude, ChatGPT, or other broad large language models that do everything. Seeing what the different ones are doing and comparing that to what you’re doing is big. Also, if you’re going to use AI for your specific work-related tasks, have that work-related AI. For other people, let’s say you want to work with your finances with AI more closely, find an AI more adept at finances. Find something more specific to finances. We’re building some of those right now, which has been really cool. A lot of times you think, why would I go find a finance-specific AI when I just have this one that can do everything? That’s kind of the problem. Claude or Grok can do everything, and it’s pulling from so many more sources. In some cases, having something more specialized to what you’re looking for will help you get more accurate responses. Yes, cross-verification is always great, but it’s a little easier to start closer to where you want to go.
Ashley: Yes, I mean, this is the same thing we talk about in terms of building agents. You build an agent to take care of a specific task, not every task. You might need several agents, each with their own task. I think about this too. We were talking earlier about ChatGPT being terrible with images versus Gamma, Beautiful.ai, or even the AI built into Canva or Figma. Those are graphic-design-specific tools, and they turn out much better results. It's the same thing. Sometimes it's also worth gut-checking against the old-school or analog solution. Budget templates are a perfect example. You can get crazy Excel spreadsheets built with all these formulas, tabs, and pivot tables. You can input this and that and get crazy with it. But if all you're trying to do is figure out a basic budget for yourself, you don't need the craziest pivot table designed for a solopreneur with a hedge fund and five mortgages. That's way more than the average college student just trying to figure out a standard budget needs. So if you're going to use AI for that, there are models that are really great for ultra-specific situations, and then there are agents that are better for simpler personal finance questions.
Anna: Yes, telling it, hey, I'm a college student, I have rent, utilities, and food with my roommates, how much can I afford to spend? Or how much do you recommend putting into a savings account? That's a very different question from, I'm 62 with a net worth of a couple million and I'm looking to retire soon; I'm about to get Social Security of this amount, I have passive income of this amount; recommend where I should live, where I should retire to, and where the best tax advantages are given my financial situation. There are different tools for the people in those two situations, and, as with most things even before AI existed, it's always about finding the right tool for the task at hand. Even if it's just a financial question, that's great, we've narrowed it in a little bit. But those two examples you gave are two very different financial questions, which could require two very different tools.
Anna: Well, before we sign off for the day, is there anything else you want to talk about? Any of the many things we've discussed that we just didn't get to, or something that's been sitting at the front of your mind?
Ashley: I think we've talked about it quite a bit, but I will reiterate: tools are excellent, but do not delegate all of your human smarts, work, problem solving, and creativity to the tools. Humans use tools. You do not serve the tool; the tool serves you. I think that's the biggest callout with all of these questions. It's great to talk about technology, tools, automations, workflows, and all the things. But at the end of the day, keeping that human benefit and partnership central is really key.
Anna: Well, thank you so much, Ashley, for being here and having this lovely conversation about AI, marketing, finances, and industries generally. It was great to have you, and thank you to all of our viewers for watching this episode or any clips you might have found wandering around the internet. We really appreciate it. If you want to learn more about what NOYACK is building with AI, you can head over to our website at wearenoyack.com and get a glimpse of what's going on there. You can also access all of our financial education resources on that site. Ashley, is there anywhere people interested in your work might go to learn more?
Ashley: The best place to connect is on LinkedIn. So just Ashley Faus, and I share primarily about marketing, but some about leadership, management, and how to deal with this crazy age of AI as well.
Anna: Well, thank you again, and we will see our viewers in the next one. Thank you.


