{"id":5849,"date":"2024-10-31T15:46:23","date_gmt":"2024-10-31T15:46:23","guid":{"rendered":"https:\/\/tech.newat9.com\/index.php\/2024\/10\/31\/does-genai-impose-a-creativity-tax\/"},"modified":"2024-10-31T15:46:23","modified_gmt":"2024-10-31T15:46:23","slug":"does-genai-impose-a-creativity-tax","status":"publish","type":"post","link":"https:\/\/tech.newat9.com\/index.php\/2024\/10\/31\/does-genai-impose-a-creativity-tax\/","title":{"rendered":"Does GenAI Impose a Creativity Tax?"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<div class=\"article-left-col\">\n<section class=\"article-topics\">\n<h4 class=\"article-topics__title\">Topics<\/h4>\n<ul class=\"article-topics__list\">\n<li class=\"article-topics__item\">\n                <a href=\"https:\/\/sloanreview.mit.edu\/topic\/managing-technology\/\" target=\"_blank\" rel=\"noopener\">Managing Technology<\/a>\n            <\/li>\n<li class=\"article-topics__item\">\n                <a href=\"https:\/\/sloanreview.mit.edu\/topic\/ai-machine-learning\/\" target=\"_blank\" rel=\"noopener\">AI &amp; Machine Learning<\/a>\n            <\/li>\n<\/ul>\n<\/section>\n<section class=\"article-section\">\n<h4 class=\"article-section__title\">Frontiers<\/h4>\n<p>\n            An <cite>MIT SMR<\/cite> initiative exploring how technology is reshaping the practice of management.        
<\/p>\n<p><a href=\"https:\/\/sloanreview.mit.edu\/big-ideas\/frontiers\/\" class=\"article-section__link\" target=\"_blank\" rel=\"noopener\">More in this series<\/a><\/p>\n<\/section><\/div>\n<figure class=\"article-inline\">\n<img decoding=\"async\" alt=\"\" src=\"https:\/\/sloanreview.mit.edu\/wp-content\/uploads\/2024\/10\/2025WINTER_Castro-1290x860-1.jpg\"\/><figcaption>\n<p class=\"attribution\">Chris Gash\/theispot.com<\/p>\n<\/figcaption><\/figure>\n<p>Generative AI systems that model language have shown remarkable proficiency at a variety of tasks, and employees have embraced them to speed up writing and software development work in particular. The productivity boosts promised by these tools, such as ChatGPT, are leading many managers to incorporate them into workflows. However, our research reveals that these efficiency improvements come with a potential downside.<\/p>\n<p>Overreliance on AI may discourage employees from expressing their specific know-how and coming up with their own ideas, and could also result in increasingly homogenized outputs that limit the advantages of employee diversity. In the long term, this could diminish innovation and originality. 
Managers seeking to gain efficiencies via large language models (LLMs) will need to help employees thoughtfully balance productivity and creativity in their collaboration with AI.<\/p>\n<h3>The Trade-Off Between Originality and Effort<\/h3>\n<p>AI-generated content impressively mimics the linguistic fluency of human-created content but typically lacks a specific user\u2019s stylistic choices and the original thinking that person would naturally express when accomplishing the task without AI. Aligning AI outputs with a user\u2019s actual intent can require iterative, time-consuming prompt refinement that users may decide is not worth the effort if the AI\u2019s early output is considered good enough. Thus, users face a decision: Invest time in customizing generative AI suggestions to progressively reflect more of their unique style and know-how \u2014 a process that can eat up productive time \u2014 or settle for somewhat suboptimal first drafts.<\/p>\n<p>Consider a team of software engineers collaborating on a large-scale software project. 
As they work on the code base, each team member will make coding and documentation decisions that are in line with agreed-upon standards but are also driven by each individual\u2019s own experience and preferences regarding object architecture, function naming, testing choices, and so on. Just as writers of prose aim to craft brilliant turns of phrase, software engineers strive to develop elegant and original solutions to coding problems.<\/p>\n<div class=\"callout-pullquote callout-pullquote--no-quote\" data-aos-duration=\"900\" data-aos-anchor-placement=\"bottom-bottom\" data-aos-easing=\"ease-out-back\" data-aos=\"fade-new-left\">\n<p class=\"callout-pullquote__quote\">\n\t\t\t\t\tToo much focus on productivity goals and deadlines may encourage employees to accept more generic generative AI outputs.\n\t\t\t\t\t<\/p>\n<\/div>\n<p>When productivity is prioritized, LLM-based tools such as GitHub Copilot make it easy to quickly generate a draft or autocomplete large blocks of code. This can save a lot of time, given that the tools often write decent code and can quickly improve existing code. However, the AI\u2019s first draft might not reflect the team\u2019s best practices or an engineer\u2019s know-how and style. While engineers can refine their AI prompts or edit the code manually to improve it so it\u2019s more faithful to their intent, doing so will slow them down. However, neglecting to do so can have negative implications for future productivity: Later, when a programmer needs to come back to the code to fix a bug or make improvements, the costs and effort of addressing any shortcomings may be significantly higher. 
Indeed, <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3491101.3519665\" target=\"_blank\" rel=\"noopener\">research<\/a> has found that individuals often struggle to review and adapt code generated by AI; in some cases, it might be more efficient to start from scratch if changes are needed.<\/p>\n<p>The consequent risk for managers is that putting too much focus on productivity goals and hard-to-meet deadlines may encourage employees to eschew the extra effort and simply accept more generic outputs. This could have significant negative repercussions: A 2023 study found <a href=\"https:\/\/www.gitclear.com\/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality\" target=\"_blank\" rel=\"noopener\">that LLM-based coding tools can diminish code quality<\/a> and maintainability.<\/p>\n<p>This scenario is one already faced by workers leaning on generative AI tools to do their writing for them. AI integration with enterprise software suites has made it easy to task LLMs with writing emails, generating reports, or designing presentation slides. In such cases, users face a similar trade-off between accepting output that they might judge to be suboptimal in terms of accuracy or writing originality, and making the extra effort to <a href=\"https:\/\/sloanreview.mit.edu\/article\/want-better-genai-results-try-speed-bumps\/\" target=\"_blank\" rel=\"noopener\">coax better results out of the tools via prompt refinements<\/a>. Time pressure is likely to loom large in these user choices. There is no free lunch here: The more time users spend editing the content themselves or refining iterative prompts, the closer the tool\u2019s output will be to users\u2019 preferences and standards. 
If they routinely accept the initial AI output, the organization will accumulate content \u2014 or code \u2014 that doesn\u2019t really reflect the know-how and expertise for which employers value talented performers.<\/p>\n<p>In a working paper, we introduced a simple mathematical model that captures <a href=\"https:\/\/arxiv.org\/abs\/2309.10448\" target=\"_blank\" rel=\"noopener\">key aspects of human-AI interactions<\/a>. Below, we describe what it teaches us about the potential consequences of broad AI adoption.<\/p>\n<h3>Putting Creativity at Risk<\/h3>\n<p>Our research suggests that as people try to balance the trade-off between getting optimal output and working most efficiently when interacting with AI, diversity of thought and creativity tend to be lost. Defaulting to the tool\u2019s unmodified output can result in content that is more homogeneous than what would be created by individual humans. If everyone\u2019s emails were written by Microsoft Copilot, for example, they would likely all sound similar. Such homogeneity at scale can put at risk the originality and diversity of ideas and content that are essential for growth and innovation.<\/p>\n<p>This issue of homogenization intensifies when AI-generated content is used to train subsequent AI models. The rational use of this new technology and the AI\u2019s learning process can create a feedback loop, potentially leading to a homogenization death spiral \u2014 in which the AI-generated content loses diversity. This concern is heightened as more AI-generated content finds its way into the pools of data used to train LLMs, whether it be proprietary organizational content or material on the internet. 
If the web becomes saturated with AI-generated content and we increasingly incorporate AI into our workflow and content generation processes, the creativity and diversity of our ideas will be significantly reduced. Some researchers have made the case that LLMs training on more LLM-generated content than human-generated content could even <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2305.17493\" target=\"_blank\" rel=\"noopener\">lead to the collapse of LLM models<\/a>.<\/p>\n<p>But at this point, because AI can competently generate a great deal of routine content, it may seem that losing diversity of thought is of little consequence, especially given the potentially large efficiency gains. However, the habit of defaulting to LLM outputs could have far-reaching implications for innovation, which depends on originality and creativity. Managers must balance the focus on productivity gains with ensuring that AI tools enhance rather than limit the ideas and perspectives expressed in work products.<\/p>\n<div class=\"callout-pullquote callout-pullquote--no-quote\" data-aos-duration=\"900\" data-aos-anchor-placement=\"bottom-bottom\" data-aos-easing=\"ease-out-back\" data-aos=\"fade-new-left\">\n<p class=\"callout-pullquote__quote\">\n\t\t\t\t\tHomogeneity at scale can put at risk the originality and diversity of ideas and content essential for growth and innovation.\n\t\t\t\t\t<\/p>\n<\/div>\n<p>There are several ways that managers can gain generative AI\u2019s productivity benefits while also preserving creativity and diversity of thought. First, they should rethink productivity expectations. When evaluating the potential use of generative AI for a given task, managers should consider the nature and requirements of the task and how much oversight or original thought employees are expected to contribute. 
In some cases, employees may need more time to complete the task with AI.<\/p>\n<p>Enhancing human-AI interactions by enabling users to more easily guide, amend, and correct model output can play a crucial role in their success. For example, retrieval-augmented generation uses external knowledge bases to improve output accuracy. Comprehensive training in prompt engineering should also make it easier for users to convey their own ideas to shape more-original LLM outputs.<\/p>\n<hr class=\"break\"\/>\n<p>Historically, shifts in business, such as automation and offshoring, have transferred the burden of labor and routine tasks to machines or external parties. In turn, this has enabled businesses to increase their productivity and lower their costs. In contrast, while generative AI technology also promises productivity enhancements and reduced costs, it affects businesses in a different realm: that of ideas, content, and innovation. It can lessen our cognitive load in tasks like drafting routine documents or analyzing long reports. However, as we\u2019ve argued above, there are risks to outsourcing too much of our own original or critical thinking. We promote the use of AI as an assistant that enriches our lives and work rather than a substitute that erodes the richness of our individuality and the diversity of our thoughts.<\/p>\n<p>To mitigate these concerns, it is essential for leadership to guide their teams in using AI tools thoughtfully. Managers should encourage their employees to authentically express their distinct perspectives and actively contribute their creativity to the company. This will not only ensure that AI systems are better utilized for realizing efficiency gains while maintaining originality; it will also guard against the potential pitfalls of a homogenous, AI-influenced culture. 
Cultivating a balanced relationship between humans and AI, in which each complements the other, will be pivotal in navigating the evolving landscape of AI-driven production and creation within our businesses.<\/p>\n<div class=\"article-authors\" id=\"article-authors\">\n<h4 class=\"article-authors__title\">About the Authors<\/h4>\n<div class=\"article-authors__bio\">\n<p>Francisco Castro is an assistant professor of decisions, operations, and technology management at UCLA Anderson School of Management. Jian Gao is a doctoral student at UCLA Anderson. S\u00e9bastien Martin is an assistant professor of operations at the Kellogg School of Management at Northwestern University.<\/p>\n<\/div><\/div>\n<\/div>\n<p><a href=\"https:\/\/sloanreview.mit.edu\/article\/does-genai-impose-a-creativity-tax\/\" target=\"_blank\" rel=\"noopener\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Topics Managing Technology AI &amp; Machine Learning Frontiers An MIT SMR initiative exploring how technology is reshaping the practice of 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":5850,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/posts\/5849"}],"collection":[{"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/comments?post=5849"}],"version-history":[{"count":0,"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/posts\/5849\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/media\/5850"}],"wp:attachment":[{"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/media?parent=5849"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/categories?post=5849"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tech.newat9.com\/index.php\/wp-json\/wp\/v2\/tags?post=5849"}],"curies":[{"name":"wp
","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}