accounting arc: exploring ai’s role in modern accounting

stringent ethical oversight and proactive regulation are needed as ai becomes more integrated into daily life.

accounting arc 
with donny shimamoto and liz mason
center for accounting transformation

in the latest episode of the accounting arc podcast, hosts liz mason, cpa, and donny shimamoto, cpa/citp, cgma, explore the evolving role of artificial intelligence (ai) in the accounting profession, focusing on both its transformative potential and the ethical challenges it presents.

more: accounting arc: unraveling the collapse of silicon valley bank | accounting arc: introducing accounting, reaction, comedy | harper & co. cpas: the perspective of a non-accountant is imperative | menlo innovations: improve office culture by overhauling internal reviews | dustin wheeler: for serious cas success, hire tech teams | chase birky: overcoming paralysis by analysis


their conversation sheds light on the history of ai in accounting, the distinction between different types of ai, and the implications for professional ethics and regulatory standards.

“generative ai is different than machine learning, but machine learning has been in the accounting space for a long time,” says mason, the ceo and founder of high rock accounting. “our code of ethics should be considered at every point in time, and we need to understand the tech, review it, and manage it appropriately, making sure that we’re ahead of the regulatory environment and we’re advising on what it looks like for the future of our profession.”

historical context and evolution of ai in accounting
mason and shimamoto begin by discussing the longstanding presence of ai in accounting, tracing back to simpler forms of automation like automated bank feed classifications in quickbooks online (qbo) and expense categorization in expensify.

“we’ve actually had ai operating in accounting for quite a while now,” says shimamoto, the founder and managing director of intraprisetechknowlogies and the founder and inspiration architect for the center for accounting transformation. “so, i feel like it’s actually a misnomer that people go, ‘oh, it’s this new thing that’s invading our industry.’”

shimamoto explains that these early applications laid the groundwork for more advanced ai technologies now entering the field, noting that ai has been an integral part of the background processes for years, evolving from basic machine learning algorithms to more sophisticated generative ai.

differentiation between ai technologies
part of their discussion centers on distinguishing between generative ai and other ai forms.

mason clarifies that generative ai can train on large data sets and create new, original data, marking a significant shift from traditional machine learning, which primarily predicts outcomes based on historical data. this capability introduces both opportunities for innovation and new risks, particularly concerning data integrity and the creation of fictitious information, as generative ai can “make up” data when insufficient inputs are available.
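the distinction mason draws can be sketched in a few lines of code. below is a hypothetical, deliberately tiny illustration of the “predictive” side: a classifier that assigns gl codes only from labels it has already seen in historical data. the keywords, codes, and function name are invented for illustration. generative ai, by contrast, could produce a code (or a whole narrative) that appears nowhere in the history.

```python
from collections import Counter

# hypothetical historical transactions: (description keyword, gl code)
history = [
    ("uber", "6100-travel"), ("uber", "6100-travel"),
    ("staples", "6200-office"), ("uber", "6100-travel"),
    ("staples", "6200-office"), ("aws", "6300-software"),
]

def predict_gl_code(keyword):
    """predictive ml in miniature: vote with the codes seen for this
    keyword in the historical data. it can only echo labels it has
    already seen; it never invents a new code, which is the key
    contrast with generative ai."""
    votes = Counter(code for k, code in history if k == keyword)
    if not votes:
        return None  # no history -> no prediction, nothing "made up"
    return votes.most_common(1)[0][0]

print(predict_gl_code("uber"))      # → 6100-travel
print(predict_gl_code("doordash"))  # → None
```

the point of the sketch is the `None` branch: a purely predictive system declines when it has no data, whereas a generative model may fabricate a plausible-looking answer instead.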

ethical implications and biases
the conversation pivots to the ethical use of ai, a central concern given the technology’s potential to embed and amplify societal biases. mason and shimamoto ponder the impact of biased data on ai outputs, highlighting the risk of perpetuating or even exacerbating existing social biases through unfiltered ai applications. they discuss real-world consequences, such as discriminatory practices inadvertently supported by ai systems in hiring or customer service, underscoring the importance of critical oversight and bias management in ai implementations.

regulatory concerns and data privacy
both hosts delve into the complexities of regulation and privacy, particularly the challenges of keeping pace with rapid technological advancements. they examine scenarios where ai might intersect with regulatory issues, such as the unauthorized sharing of tax information or data misuse that could violate privacy laws. the discussion covers the responsibilities of professionals to ensure compliance with existing regulations and ethical standards, stressing the need for proactive governance and clear guidelines on ai usage within firms.

practical applications and future outlook
throughout the episode, mason and shimamoto discuss practical examples of ai in accounting tools like microsoft’s copilot and dynamics. mason shares her experiences with these tools, offering insights into their real-world applications and learning curves. these examples illustrate ai’s potential to streamline tasks and improve efficiency but also highlight the care needed to ensure outputs are accurate and appropriate.

“think of ai like a junior staff member,” shimamoto says. “it can do a lot of the grunt work like drafting emails or doing initial research, but you still need to apply your judgment to it. does it make sense? you need to go and verify the same way you would ask a staff member to show you where they got their information from and what authoritative source they are using.”


key takeaways

  1. ai, particularly generative ai, is increasingly integrated into accounting practices, from basic data processing to complex decision-making tools. however, this integration requires rigorous oversight to ensure the outputs are reliable and ethically sound.
  2. accounting professionals must understand ai technologies’ capabilities and limitations to implement them effectively. this includes recognizing when ai will likely introduce biases or errors and how to mitigate these risks.
  3. as ai technologies evolve, so must the regulatory frameworks that govern their use. professionals are urged to stay ahead of regulatory changes and understand how to apply ai in compliance with national and international standards.
  4. ongoing education on ai technologies and their implications is crucial for all levels of accounting professionals. firms should prioritize training to ensure their staff can use ai tools effectively and ethically.
  5. the ethical management of ai requires proactive strategies to ensure data privacy, correct biases, and maintain the integrity of professional services. this involves setting clear policies on data use, understanding the source and quality of the data feeding into ai systems, and ensuring transparency in ai-driven decisions.

transcript
(transcripts are made available as soon as possible. they are not fully edited for grammar or spelling.)

liz mason  00:04
we’re back today to talk about artificial intelligence, which has been, you know, one of those big buzzwords hitting every industry pretty hard. i think since chatgpt-4 came out, everyone’s been kind of playing with it from the administrative side, thinking about what ai can actually do from a narrative perspective and, you know, interpreting and helping with, like, human language, which is fascinating the way that it’s developed. but what are the implications when it comes to finance and accounting? because that’s really what we care about, right? how these different techs affect our industry. so donny, what are some of the, like, biggest things you’ve seen of ai influence on our industry?

donny shimamoto  00:43
the, you know, the buzz, like you said, right now is around all the generative ai with the text and language and everything. but what i think people don’t realize is that we’ve actually had ai operating within accounting for quite a while now. i mean, go all the way back to qbo adding it in to help with the classification on the bank feeds. was that 10 years ago already, at this point? i feel like we’ve had that for a long time. and then all of the bill reading in expensify, it has receipt reading, which is why it can read that receipt to figure out the tip and the totals. so i feel like it’s kind of a misnomer that people go, oh, it’s this new thing that’s invading our industry. and it’s like, actually, the vendors have had it all in the background.

liz mason  01:31
yeah, i also agree with you. i think it’s really fascinating how, you know, people in accounting and finance, particularly people that got our degrees, like us, more than maybe 10 years ago, don’t necessarily know the different pieces that go into ai. and so definitions are really important as well. and you brought up a term, generative ai. so generative ai, right, it’s the type of ai that can train on a big data set and then create new data. and that’s kind of where, you know, ai has gone to, and what we’ll see more investment into, versus the type of ai that’s already embedded in our accounting apps. so machine learning is a type of ai, right. and machine learning is similar to generative ai in that it is trained on a big data set. you know, they go through hundreds of thousands, if not millions, of transactions and teach it through algorithms what it should predict next, right. but it’s all done on averages and the usage in that system, versus generative ai, which is kind of still a black box, even to data scientists, right. and so, you know, the algorithms that they build create this ability to generate new content and new ideas, which is more than just numbers, right? it’s not saying, like, here’s one specific code, you have 100 to pick from, and then you pick a gl code. that’s more machine learning type algorithms. versus, hey, i’m gonna give you a prompt that’s narrative based, and you’re going to generate a response, where we’ve seen so many interesting things come out, some of them actual reality, some of them completely made up, right. and i think it’s really fascinating to, like, differentiate and historically understand where this has come from. but we’ve had programs like mindbridge in accounting for a while. so, you know, how does that work?

donny shimamoto  03:23
well, even with those, i feel like with something like mindbridge, you know, to me, it’s just another tool, like excel or any of these others. when you look at what it’s doing, it’s helping to automate what we already have defined as essentially a rules-based test, showing the differences. it’s taking the results of that, aggregating that, and learning as you’re working with it more to really pull everything there. so there’s still this whole concept of learning inside of machine learning. and when we look at the generative ai and things, it’s more of that deep learning or those neural network types of entities, rather than just what we’re really working with. but they all kind of work together. and that’s the other thing i think people don’t realize: we’re talking about different types of ai, and they all kind of work together and sometimes overlap as well, right?

liz mason  04:14
so that type of ai is more pattern recognition and statistics. so again, that’s back on the machine learning side, where you give it a statistical dataset and you say, hey, identify outliers. and instead of, like, an auditor that can only do sample-based statistical testing, that program is able to go in at the transaction level and go through all of the transactions to identify outliers. and so it’s really truly not, you know, generative ai, where it can generate new information. it’s historical machine learning type algorithms, where it gets better the more data sets that it sees, because it understands, statistically speaking, what a bigger data set is. and if you think about it from a math perspective, the more data you have, the more understanding you have of the outliers and can really identify them, right? and that’s all it’s doing, is identifying outliers and pulling them in for you to be a better auditor, right? so, like you said, it’s just another tool. and there’s a few other tools, you know, that have been trying to integrate ai into our every day, like microsoft and copilot. have you played with copilot yet?
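the full-population outlier testing mason describes can be sketched with a simple z-score check. this is a hypothetical illustration only, not how mindbridge or any specific product actually works; the amounts and the cutoff are made up.

```python
import statistics

# hypothetical month of expense payments; one entry is anomalous
amounts = [120.00, 95.50, 110.00, 130.25, 99.00, 105.00, 9800.00, 115.00]

def flag_outliers(data, z_cutoff=2.0):
    """score every transaction (the whole population, not an audit
    sample) and flag any amount more than z_cutoff population standard
    deviations from the mean."""
    mean = statistics.mean(data)
    stdev = statistics.pstdev(data)
    return [x for x in data if abs(x - mean) / stdev > z_cutoff]

print(flag_outliers(amounts))  # only the 9800.00 payment is flagged
```

the contrast with sampling is the loop: every transaction gets a score, which is why more data sharpens the result rather than slowing the auditor down.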

donny shimamoto  05:21
i’m kind of waiting, because i know the general release preview, or whatever it is, is coming up. i believe it’s at the end of next month. but everything that i’m hearing sounds really great. and this is the stuff that i think makes more sense for us in looking at accounting as a whole, because it’s something that is built into the tools, like we saw with xero and qbo. it’s built into the tool for us to be able to utilize it. and that’s actually the way that i think we’re gonna see a lot of what we do in it. it’s not going to be us going and creating new apps or creating new solutions so much as using the ai that’s already built in, or incorporated as an add-on, maybe, like this copilot, which is essentially an add-on to the office suite. we’ll leverage these tools that help us achieve specific things. like, i heard copilot is really good at writing excel formulas.

liz mason  06:17
yeah, it’s really fascinating, the way that microsoft did that, right? because they’re building it on the same type of language learning models and the generative ai, but they’re also releasing it in different pieces. and so they’re building, like, a comprehensive copilot, which is, you know, a clever name, to build into all of the tools. so one of the things is, we use dynamics as our crm, so i’ve had access to sales copilot for a month now, maybe a little longer. and, like, anytime there’s an email that has any indication that it’s a sales email, or a prospect, or a client even, it’ll give me a prompt and say, would you like me to generate a response for you? and i say, yes, please automate my responses. but sometimes it makes sense, sometimes it’s not anything that i would ever use. but it is learning the way that i respond. so every time i edit it, it’s getting better and better and better at predicting what i’m going to say, when i’m going to say it, and what my voice is. and so it’s really fascinating, you know, just in my outlook, playing with that from a communication perspective. but it’s also kind of scary. because when you think about, like, if i’m selling, or if i hire someone onto my team to sell, and they don’t know our products very well, and you have sales copilot go in there and say, yeah, we can absolutely do your, you know, crazy cost seg analysis for tax. we don’t do that, we sub that out. and we’re very open with our clients that we’re not, you know, a specialty tax credit shop. but it’s scary, because it could easily, you know, answer the questions that the clients or the prospects are coming in with incorrectly, because it makes stuff up, right. and i think that that’s, like, a big question around, like, you know, how to use this type of software ethically. like, i don’t know if you saw the case where the lawyer cited a case that doesn’t exist because he used chatgpt to write his case briefs.
so, like, that is terrifying and also fascinating. like, the fact that it got through, like, the backup checks. chatgpt made up a legal case that a lawyer then cited in his argument in front of a court. like, what?

donny shimamoto  08:48
well, and i think you’re bridging to what i think is even more important that we look at from a professional standpoint as an accounting professional, which is that these technologies really are just tools. and i often tell people, think of them as like a junior staff. so they can help do all this grunt work, like, draft the email response, do some of the initial research for me. but you still need to come back. you need to apply your judgment to it. does it make sense? you need to go and verify, the same way you would ask a staff member, well, show me where you got this from, which tax regs or what part of the standard, you know, what authoritative source are you using, all of this type of stuff. because all of this is trained a lot off the internet. and yeah, we all know we can completely trust the information that’s on the internet.

liz mason  09:43
right? and i mean, you bring up a bigger thought around that as well, which is, you know, they’re training these datasets on massive amounts of information that has been filtered mostly by very cheap labor in other parts of the world, who don’t necessarily speak english as their primary language. and so when you start to, like, backtrack into understanding where the information is coming from, you know, it’s cleaned data from the internet, and cleaned means a person in there cleaning it and identifying it in a way that the program can learn from it. and, you know, it’s biased. by and large, the internet is a biased place, and it embeds all of the biases of society. and, you know, i mean, i think both of us are in marginalized communities, and thinking about the implications of, you know, bigoted information getting fed into that and multiplied. because if you think about it from, like, a programming perspective, the full amount of information is huge, right. and so to train this, you have to truncate it, right. but when you truncate something, on the output, it ends up being multiplied to get back to the full size, right. so even if, you know, the bias is small by percentage, statistically, in the general dataset, it will come out the other side much larger than the input. and so it magnifies any biases that it finds, because it believes that to be truth. and that’s really terrifying when you start thinking about that from, you know, generative ai, and what happens from a content perspective, and how people are using it, and what they need to do to filter to make sure that it’s not biased.

donny shimamoto  11:29
completely, completely. and i think you raised another good point, which is, people say the technology is biased. and it’s not the technology that’s biased. it’s a reflection of society and what’s out there. so we have to remember that. because part of what i see, i feel like, is that perhaps the two extremes push a lot of content out, and there really are more of us that are, say, moderate, or in the middle, that have perhaps the majority perspective, and we’re not the minorities that are being the most vocal, so we need to make sure that we’re getting more content out there. but there’s this whole bias. that’s the other one that i talk about a lot. i’ve spoken with regulators as well as other practitioners around, hey, we’ve got to understand where these biases are, and how do we actually stop them, because they can actually cause problems. the one example i always use is, if you’re using ai to help with recruiting analytics or performance analytics around staff. if you were to go and take a sample right now and say, let’s look at the leaders in our profession, and who’s up there at the top, it will tell you that it’s older white males. so based upon that, ai is gonna say, well, who do you hire? you hire younger white males. and tell me how many issues that’s going to cause as you start to look at not just the ai issues, but even things like eeo compliance. so we need to make sure that we put the right safeguards or controls in place to say, hey, ignore race, ignore gender, and ignore age, definitely, as we look at all this stuff.

liz mason  13:11
ya know, it’s fascinating to think about the implications, but it’s also terrifying when you really start to see the biases come out in the technology directly. yeah, it becomes a concern of, like, immense proportion, where we have very few people thinking about, you know, ethical uses of data, and how that can interfere in, like, larger issues, right. so my sister is actually a data scientist, so i’m lucky enough to get to talk to her a lot about this stuff. but one of the examples she used was, like, the classic example of data misuse. think about an insurance company, right. so it is illegal for an insurance company to charge more based on race, based on, you know, like, specific protected demographics, right. however, it is not illegal for an insurance company to change rates based on zip code. so if you think about that, when you look at large cities, people of specific races tend to live in specific neighborhoods. and they started early, in, like, the ’90s and early 2000s, really using that data and getting the ability to hone in on who they could discriminate against without illegally discriminating against them. and now, whose responsibility is it to bring that up, right? is it the responsibility of the data scientist that they hired to do statistical analysis and tell them what trends they find in terms of car break-ins in different neighborhoods? is it, you know, the manager who reviewed that work, who’s making recommendations on rates to the higher levels? or is it the executive team, to say, hey, we realize this is an issue, and there’s a bias in this data that’s directly correlated to race, we should eliminate that as a point of, you know, how we do our business? but none of them did. so whose responsibility is it? who stands up? and i say it’s every individual. so if you’re the statistical analysis person, you need to raise a red flag and say, hey, this is directly correlated,
and, you know, i see this as a biased use of data. and also, as a manager, say, no, we’re not going to do that, that doesn’t make sense. and as an executive, say, absolutely not. like, that might make more money for the company, but that’s not what we’re about, right? we need to be ethical in the way that we’re using the information.
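the zip-code example can be made concrete with a toy calculation. everything below (zips, groups, surcharges) is fabricated for illustration: the pricing rule never sees the protected attribute, yet average rates still differ by group, because zip code acts as a proxy for it.

```python
# hypothetical policyholders: (zip code, demographic group)
# in this toy city, group membership correlates with zip code
policyholders = [
    ("85001", "a"), ("85001", "a"), ("85001", "b"),
    ("85020", "b"), ("85020", "b"), ("85020", "a"),
]

# "neutral" zip-based pricing: the rule never mentions the group
zip_surcharge = {"85001": 40.0, "85020": 0.0}

def avg_rate(group, base=100.0):
    """average premium actually paid by members of a group."""
    rates = [base + zip_surcharge[z] for z, g in policyholders if g == group]
    return sum(rates) / len(rates)

print(avg_rate("a"))  # higher: group a mostly lives in the surcharged zip
print(avg_rate("b"))  # lower, even though group was never an input
```

this is the proxy-variable problem in miniature: auditing only the model’s inputs would find nothing, which is why the hosts argue that outcomes, not just inputs, need review.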

donny shimamoto  15:45
completely, completely agree. and a lot of that, i think, ties back into effectively corporate governance, the need to ensure that we have these programs around dei to raise awareness, like sexual harassment training, like acceptable use policies in it, that we’re providing the training out there so that everyone, or anyone that happens to make that connection, goes, wait, these things are connected. so even though it’s not a direct violation, it’s an indirect violation of some of these things, and we can stop that at the second layer. and i start to think then of the, what is it, the three lines of defense model from the iia, the institute of internal auditors, that says, okay, finance is kind of your first line there, and then there’s internal audit. so it actually needs to be very much informed of these types of things, and to do more of that critical thinking around, is the way that this thing has been designed, are we actually addressing some of these issues? because it’s probably just a matter of a court case coming through that says, hey, this is indirectly causing this, and it’s in violation. and then now we’re going to set a new standard, and you don’t want to be the company that ends up having to deal with that.

liz mason  17:04
yeah, but it’s also, like, i think we need to embed it into our culture to think about these things and be ai literate. i mean, the same way we need to be literate understanding the media right now and the bias that’s been in there, we also need to be literate understanding how this technology works and what we need to be looking for as we use it. it leads me into, you know, another thought around our particular industry, right? like, you can go to any conference and hear somebody talking about how ai can just replace an auditor tomorrow, right? well, first off, it can’t, because regulatorily, it cannot. that’s not going to happen, right. but foundationally, there is the technology available to do the type of work that auditors do: the statistical analysis, the testing, the, you know, confirmations, even all of the work papers that we prepare can be done, and tech-assisted. so where is it okay for, you know, an auditor, like a senior associate, to use these tools, and where is it not okay? like, how do you draw that line ethically?

donny shimamoto  18:14
luckily, i think that actually is very easy. and the answer is that they can always use the tools, right? the difference, though, is that the tool, and again, i always tell people, think of it as like a staff, the tool needs to be supervised. we need to look at the output that it’s providing, does it make sense, and exercise our professional judgment as the senior, the manager, the partner, whoever it is, to look at: does this make sense? is it doing what i intended? and, as you indicated earlier, also looking at it and going, did we put the proper safeguards in place? are we using biased data? is what it’s working with somehow going to create a perverse outcome? so i think it’s awareness, as you said, and it’s not restricting people from using this, but making sure that they understand the implications of actually using these types of tools. and that actually goes both to our domestic code of conduct, within the ethics standards of the us profession, as well as the international standards. i was involved, i think it’s like three years ago, at the international level, in some of the discussions that were being held with iesba, the international ethics standards board for accountants, in looking at how the use of ai is going to affect the profession. they were very much focused in the audit realm, because the question was, can i just let ai do everything and then i just follow whatever it says? and the answer definitely was no. i mean, while people think you can do that, we all know audit, and the interpretation of the financial reporting standards, really is a judgment call. it’s also why you can’t just use the automatic coding that’s there. with all the vendors that are coming through, vendors that could have double uses, or even items, if you’re able to get to the item level, items could have multiple uses, and those always need to be flagged for human review.
because if it starts to code it wrong, that becomes just this ramping thing, because it’s learning from the wrong thing. and then it’s going to magnify that problem, as you said earlier.

liz mason  20:18
and i would break it down, like, a little more granularly as well, in, like, a practical way. so we talk about this internally at high rock frequently with the team. it’s, like, where the data goes, right. so some of these tools are, you know, very secure, their privacy levels are absolutely, you know, ready for financial data and identified financial data. and with some of them, all of that information goes into the dataset. so you have to understand, when you’re using a tool, where the data is going. is it just for purposes of this one analysis you’re doing? or is it going into the mass data set that will be used to draw on in the future for anybody using that tool? and, you know, there are ways to take identified data and anonymize it and put it into the mass general pool, which some companies do, and most of them, unfortunately, don’t. so you have to really understand the privacy and security around where the data goes. so, like, i told my team that, with client data, you can’t put our clients’ financial statements or identifying data into something like chatgpt and say, craft a cfo letter for the month, right? because it’s using identified data, right. and it’s using information that is private to our clients and is not something that we’re authorized to share with the public, effectively. but you can use a tool that’s, like, designed for that type of data privacy to do that type of work, right? and so differentiating and understanding, underlying, where the information is going is very important. like, i think, using chatgpt as a baby auditor to say, hey, write a generic footnote about this issue, and just make it very general, right. and then you fill in the client information, and you fill in the numbers that are relevant for each year, and the little, you know, sub table that has to go with all the footnotes, right? that’s something i think is totally reasonable.
and it’s a tool to help, because i guarantee you, when i was a baby auditor, i was googling stuff, right? it’s the same thing if you’re, like, drafting, right, a basic footnote. but you have to verify it, right? so you have to go back to prior financial statements or other client financial statements and look at it and say, is this the right wording? is this what i’m trying to communicate? and do i need that, right? you have to, like, think through it as the human in the room and review it as you would anything else. but i think that that’s a phenomenal use of the tool. whereas, you know, maybe not putting client-specific and identifying information into the tool is better. but i have seen uses, you know, where people have fed, like, full financial statements, you know, into chatgpt and spit out some really great board reports. i personally wouldn’t do it, because i don’t know where that data is going, and i don’t want to be liable when my client comes back and says, hey, this was a data breach and a privacy issue, so now, you know, our information is somewhere else on the internet, and that hurts us financially, right. so you gotta think through the implications of where the information is going.
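the “don’t send identified client data to a public llm” rule mason describes can be partially automated before a prompt ever leaves the firm. the sketch below is a naive, hypothetical redaction pass; the function name, patterns, and client list are assumptions for illustration, and real pii scrubbing would need far more (ssns, eins, addresses, account numbers).

```python
import re

def redact(text, client_names):
    """naive pii scrub before text leaves the firm: mask known client
    names and dollar figures so a prompt to a public llm carries the
    shape of the request but not identified financial data."""
    for name in client_names:
        # mask each known client name, case-insensitively
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    # mask dollar amounts like $1,234,567.00
    text = re.sub(r"\$[\d,]+(?:\.\d{2})?", "[AMOUNT]", text)
    return text

prompt = "draft a cfo letter for acme co covering the $1,234,567.00 revenue miss"
print(redact(prompt, ["acme co"]))
# → draft a cfo letter for [CLIENT] covering the [AMOUNT] revenue miss
```

this matches the workflow she endorses: ask for the generic draft, then fill the client names and numbers back in locally, where the data never left your control.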

donny shimamoto  23:24
really. and i think, with that, what we’re going to see, actually, i’m already seeing it in some of the articles i’m reading, is that, you know, when we talked about this whole move to the cloud, there was the public cloud, the google, the aws s3, and microsoft’s azure systems, and then there’s the private clouds. and this is where we’ve got, right now, most of us are working with public ai, which is then potentially incorporating some of what you submit into the public knowledge base. but i’m starting to read articles more about the private cloud. i think it was boston consulting group, there were a couple posts about how they’re actually working with it to enable their own bolt-ons, and they’re starting to work with other large companies that have, because you do need that volume of data to teach it, incorporated ai to go back against what’s within your own corporate, or your own business’s, data store, and leverage that to generate something, rather than the entire population.

liz mason  24:28
yeah, it’s really, really interesting, too, how different companies have used their own data. and of course, the tools are better the more data you have. so unfortunately, as a small firm, we just don’t have the datasets to be able to build something ourselves that would be very useful without training it on someone else’s data. however, like, the big four all have enough data of their own to be able to train it just on their own data and just create their own private cloud, you know, tool. and back, i think, even 10 years ago, a few of them were working on ai around tax positions, and, like, building tools that would review information from clients and put it through the database of all their other clients’ information and say, oh, we took this tax position, and it caused this effect, you should recommend that to a client, and then doing the support research and providing a tax research memo just straight from the software, which i think is fascinating. i’m not sure what ended up happening with that tool, but i do know that it was in development, and it was really interesting. but i think, as these technologies get better and better, and the other side of it is, it used to take a lot longer to train them. so data scientists have gotten better at building algorithms that truncate training time, right? because you do have to give it the ability to process through massive amounts of information. and basically, they’ve done it by, you know, condensing algorithms, which, again, magnifies bias and magnifies issues with the datasets. you have to have a really clean data set to use those truncated algorithms. and they’ve also done it through, you know, better processing speeds and the ability to utilize better technology at this point. but it’s only getting faster, and it’s only getting better. and so that means that, you know, instead of taking four years to build something like this, it could potentially take months in the future.
and we’ll start seeing these types of things pop up in every sector, in every direction. and we’ll have, you know, the micro companies, the macro companies, the first movers, the late movers, and at some point, we’ll have a consolidation into what our tools will actually be moving forward. and i hope that we have some regulatory change around use of tools, and around, you know, how these companies have to secure data, similarly to what, you know, europe is doing.

Donny Shimamoto  26:49
Well, you've said that several times now about privacy and the use of data, and that's something anyone in our profession really needs to focus on: making sure you're actually getting the right permissions from the clients. Especially with tax data — the IRS has very specific requirements around sharing tax information with a third party, and especially if it ends up being used for marketing purposes. So tax preparers in particular need to be very, very cautious and ensure they're getting the right types of permission from clients if they're going to use this type of technology with tax data…

Liz Mason  27:31
Absolutely. We have a generic release in our engagement letters, effectively saying we can use your data internally and with companies that hold the appropriate privacy certifications, but we will never share it publicly. And because we're an accounting firm with tax data, we also have clients sign off that they understand their tax data may be sent outside the borders of the U.S. It's a really interesting gray area right now. The IRS started that requirement — signing off on individual tax data being shipped overseas — when the bigger accounting firms set up those big outsourcing shops 20 or 30 years ago, and they wanted taxpayers to be aware that it could happen. Great. But fast forward: now the information is accessible, and it's on servers that are not in the U.S. So what does that mean? Do we have to get those releases or not?

Donny Shimamoto  28:42
That one? Definitely yes. For people who aren't familiar with it, these are the Section 7216 disclosures. With that requirement — and from everything I'm seeing, having talked about this with both insurance brokers and providers and with the AICPA — anytime the data is being accessed, and definitely if it's moving outside domestic soil, the disclosure is going to be required. The interesting part: there's an exception provided in there for what's effectively incidental travel. So if you as a tax preparer happen to leave U.S. soil — you're on vacation, or you're working remotely for a week or so — there's an exception, and you don't have to get that consent. But what happens when that becomes longer? It was interesting to see one of the large accounting firms — I think they might be a top 20 firm — take the position that the allowance was only for that very specific situation, so they're not even allowing their staff that incidental access if they're outside the U.S. Another very large firm shared that their position is to have all clients, regardless, always sign the 7216, so they don't have to worry — because if only some sign, then you have to remember which ones did and which didn't, and how do you track your data? So I think the norm is going to be that everyone just ends up having all their clients sign.

Liz Mason  30:20
Yeah, and that's what we decided to do, because we do have a remote team. Most of our team is U.S.-based, but they travel frequently, and we ended up hiring a U.S. citizen who then moved to Bulgaria. He's on our tax team, so he's going to touch any number of clients at any point, and he does live there permanently. It's really fascinating when you start digging into the details and talking to the insurance companies about data usage. I know this is kind of a segue from AI, but I promise I'll wrap it back in, because the regulatory environment overarching all of it is not set up for the environment we have. At what point will accessing that tax data by an AI tool hosted on a server in, let's say, Indonesia count as accessing that data? I think that question is coming soon, honestly. Our regulations are not close to caught up to where AI is, and I don't think our profession really understands holistically what the software capabilities are at this point. I do think there are some scrappy startups pulling it in, using it in super cool ways, and building stuff that will withstand the next iteration without a complete replatforming of the technology. But our regulations are not there yet, and we need to be thoughtful about it, ethical about it, and very upfront with our clients about the use of these types of programs.

Donny Shimamoto  31:59
That's exactly where my mind went, because I hear people say, "Well, I'm just going to wait until the regulation comes." This is one where our profession needs to stay ahead, because — especially with our U.S. legislative process right now — regulation is going to take quite a while. You can't wait for that. And I always like to turn it back to the biggest thing, which usually ends up being public sentiment: what happens if your firm or your company is the one in the news because you allowed the data to be used in a certain way, or you used the AI in a certain way? It's that reputational risk — what's going to happen from the public outcry. That's the lens we need to look at this through…

Liz Mason  32:51
And I think it goes even a level deeper, because you as a human are incredibly high-level in your thought process, and you're very involved with international associations, national associations, state boards of accountancy, and all the people running the firms. As a firm leader, 100 percent, I don't want my firm to be in the news as the one that did the thing that created the regulation — absolutely not, I don't want to be that person. But I also want to give our team enough room to innovate with some of these cutting-edge tools, and to feel empowered instead of just shut down. So whether you're a big firm, a medium-sized firm, or a small firm, if your stance is "hands off, hard stop, we're not doing it, I don't understand it, my staff can't use it" — they're not going to listen. They will use it, particularly the — I say kids, but the young adults graduating from college right now and coming into our profession. They're used to using these tools for their schoolwork, and quite frankly, they're probably better attuned to what type of review those tools need than even you are. And giving them —

Donny Shimamoto  34:07
Now that I have to disagree with you on. Yes, they're used to using it, but the meaning of the standards, the privacy requirements, all of those — they definitely don't have those. When you look at the research, just from a data breach standpoint, this incoming generation is actually the highest at risk and has had the most incidents of falling for these schemes, because they automatically trust the data; they trust the technology. So there's actually a very strong need for education around what is appropriate and what we should and shouldn't be doing.

Liz Mason  34:48
Let me back things up and rephrase: the kids coming out of school know how to use the tools better than you do. But you know the industry, you know the privacy rules, you know what is allowed in your firm, you know what should be right — and you need to train them. I 100 percent agree with that, and I think there's a whole level of training coming on how to appropriately use these tools and what the ethical side of it is. I'm pretty sure Donny's already presented it 75 times, because that's who he is. But back up and think about when Excel was first introduced. Who was the best and most efficient at using it? It was the younger generation. It was my generation coming out of school, knowing how to do VLOOKUPs, knowing how to program in Visual Basic to build our own tools. My supervisors had no idea how the program worked in the background when I was building VBA code to do analysis on different parts of audit workpapers. They were like, "I don't even understand how this works, so how do I make sure it's right?" So our conversations became: okay, let me show you how I did this calculation, let me explain the background, and then you tell me if it's right. I think that, translated a level up, is where we're at now. They know what prompts to use with these tools to get the answers they want. They know how to use these tools efficiently, how to integrate them, how to pull them together. And basically, they're lazy — let's be honest, we're all kind of lazy at the core of it; we're going to use tech so that we don't have to do it manually. I might understand the data privacy issues, the ethical issues, all the potential things that come out of it, but they understand foundationally, in their DNA, how to actually make the thing do the thing they want. Does that make sense?

Donny Shimamoto  36:38
And I completely agree with you on that point. The other thing that's been interesting, having been at a bunch of different conferences over the last couple of weeks, is hearing the different perspectives on how people are handling this. I heard one large regional firm say their position was, "We're blocking everything." And I was like, oh my god, here we go again — with the internet, and search, and social media, that didn't work then, and it's not going to work now. You need to let people use the tools, because they'll find ways around it, whoever they are — young or not, we all know the workarounds now, and everybody has their phones and will just go to those. So it's really awareness and understanding that we have to bridge across. I want to pull on something you said, because I think it's really important. You said, "We created the VBA" — but the important thing you did, and I heard you say it, was explain to the higher-ups, the manager, the partner, whoever: this is what we did, and this is what it's actually doing. And this is where I don't think enough senior managers, partners, people at that level, even CFOs, are taking the time to make sure they understand, at least at the conceptual level, what has been done or how the tool was used. That actually goes back to a provision in our code of ethics. You can't go, "That's technology, and it's new," and automatically trust and believe what it says. That's what's called an intimidation threat — in this case, intimidation by technology — and it's part of our ethical code.
So the counter to that intimidation threat is that we take the time to at least understand, conceptually, what's happening or what the tool did, so that we can then validate it. That makes sense, and it's something we should do.

Liz Mason  38:41
Yeah, that's a really good thought. And I also believe more people need to understand foundationally how these technologies work, so that they can do that validation. Because the tools' user interfaces have become so easy that it's very easy to skip the "how does it actually do what it's doing" step — the one that lets you appropriately review it — and end up in the situation that lawyer did, where he cited a case that doesn't exist because ChatGPT made it up. That's why it's called generative AI: it generates information. If there's nothing there for it to pull on, it's going to create something that sounds realistic to finish its prompt, and that's terrifying.
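Liz's point — that a generative model will produce something realistic-sounding even when there's nothing real to pull on — can be illustrated with even the crudest text generator. A toy bigram chain in pure Python (the training sentences below are invented for illustration) will happily stitch together a fluent "holding" that no court ever issued:

```python
import random
from collections import defaultdict

# Toy training text; these fragments are invented for illustration.
TEXT = (
    "the court held that the taxpayer failed to report income "
    "the court held that the preparer failed to disclose the position "
    "the board held that the firm failed to obtain consent"
)

def build_bigrams(text):
    """Map each word to the words that followed it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length, seed=0):
    """Chain likely-looking words together; nothing checks whether the
    resulting sentence is true -- only that each step was plausible."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

follows = build_bigrams(TEXT)
# Fluent-sounding, but recombined: a "holding" assembled from fragments.
print(generate(follows, "the", 12))
```

A large language model is vastly more sophisticated than this, but the failure mode Liz describes is the same in kind: the objective is a plausible continuation of the prompt, not a true one, which is exactly how a nonexistent case citation gets generated.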

Donny Shimamoto  39:32
Completely. So how would you summarize this? What's our advice to people out there?

Liz Mason  39:40
I think our TL;DR is: AI is here. Generative AI is different than machine learning, but machine learning has been in the accounting space for a long time. Our code of ethics should be considered at every point in time, and we need to understand the tech, review it, and manage it appropriately, making sure that we're ahead of the regulatory environment and we're advising on what it looks like for the future of our profession.

Donny Shimamoto  40:08
Awesome. I want to go feed the transcript of what we said into an AI and see if it comes up with what you just said, because that was a great summary of what we discussed.

Liz Mason  40:19
Well, I always appreciate your time and talking with you about the fun topics.

Donny Shimamoto  40:24
And you as well. Looking forward to next time.