Why validation, privacy, safe architecture, and security thinking matter more in the AI age.
1. The More AI You Add, the More Carefully User Input Must Be Handled

As AI becomes easier to integrate into applications, the way we think about security becomes even more important. Even in traditional web applications, handling user input safely has always been essential. Forms, search boxes, comment sections, contact pages, login screens, and admin dashboards all come with risks whenever users are allowed to enter data.

In applications that include AI, however, these risks become even broader. This is because user input in AI-powered applications is not always just stored or displayed as plain text. It may also be used as a prompt for an AI model, a request to an external API, a database search query, or as source material for email generation, summarization, recommendation logic, or other internal processing. In other words, what a user enters can influence many different parts of the application.

For example, an application might classify a contact form message using AI. It might summarize customer reviews. It might generate improvement suggestions based on a user's business information. It might allow users to ask questions through an AI chat interface. It might analyze uploaded documents with AI. These features are extremely useful. At the same time, they require careful decisions about how much user input can be trusted, where validation should happen, and what information should be passed to the AI model.

This is where validation becomes important. Validation is not only about checking whether an input field is empty. It is the first line of defense for protecting an application. It includes character limits, format checks, filtering unexpected values, preventing malicious HTML or scripts, controlling submission frequency, and rejecting inputs that are longer than necessary. This is especially important for AI features.
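To make the checks above concrete, here is a minimal server-side validation sketch for a contact form. The field names, limits, and the email regex are illustrative assumptions, not a complete or production-grade validator:

```typescript
// Minimal server-side validation sketch for a contact form.
// Field names and limits are illustrative assumptions.
type ValidationResult =
  | { ok: true; value: { email: string; message: string } }
  | { ok: false; errors: string[] };

const MAX_MESSAGE_LENGTH = 2000; // reject overly long input before it reaches the AI

function validateContactInput(email: unknown, message: unknown): ValidationResult {
  const errors: string[] = [];

  // Format check: a deliberately simple email pattern (illustrative only).
  if (typeof email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email.trim())) {
    errors.push("email: invalid format");
  }
  if (typeof message !== "string" || message.trim().length === 0) {
    errors.push("message: required");
  } else if (message.length > MAX_MESSAGE_LENGTH) {
    errors.push("message: too long");
  }

  if (errors.length > 0) return { ok: false, errors };

  // Normalize: trim, then strip control characters (newlines and tabs kept).
  const normalized = (message as string)
    .trim()
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, "");

  return { ok: true, value: { email: (email as string).trim(), message: normalized } };
}
```

Because this runs on the server, it holds even if a client-side form bypasses its own checks.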
Inputs that are too long or intentionally crafted can lead to higher costs, slower processing, unexpected outputs, or issues such as prompt injection. For that reason, even when using AI, the basic principle is the same as ordinary web security: never trust user input by default. Do not use user input as-is. Normalize it into the expected format. Reject dangerous values. Always validate on the server side. Before sending anything to AI, confirm that it is safe and appropriate to send.

Carefully applying these fundamentals becomes even more important in AI-era application development. AI makes it possible to add powerful features relatively easily, but it also makes the interpretation and processing of user input more complex. That is why the more AI is integrated into an application, the more important it becomes to design input handling, validation, limits, logging, and error behavior from the beginning. Security is not a checklist that should be added after the application is finished. It is part of the application design itself, starting from the moment user input is received.
2. XSS, CSRF, Rate Limiting, and API Key Management Are Part of Product Design

Security measures may sometimes look like small implementation details that can be added later. In reality, that is not the case. Validation, XSS protection, CSRF protection, rate limiting, API key management, authentication, authorization, logging, and privacy protection are all important design elements that affect the foundation of an application.

In web applications, XSS protection is especially important. XSS is a vulnerability where scripts or dangerous HTML entered by one user can be executed in another user's browser. For example, if a comment section, profile field, contact message, Markdown renderer, or rich text editor displays user input directly as HTML, it can become dangerous. React escapes strings by default, which makes it relatively safe to handle user-generated text. However, extra care is required when inserting HTML directly with something like dangerouslySetInnerHTML, or when rendering content fetched from external sources.

Even a blog or portfolio site has many places where user input or external data may be handled, such as article content, image URLs, external links, embedded content, and contact forms. For that reason, it is important not to assume that "a personal site is safe enough." Once a site is public, it should be treated as a web application that needs a minimum level of protection.

CSRF protection is also important. CSRF is an attack that abuses a user's logged-in state to send unintended requests. This matters especially for operations that change state, such as creating posts, updating data, deleting records, sending emails, or changing settings. Modern frameworks and authentication services often include some level of CSRF protection, but it is still important not to rely on them blindly.
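One lightweight, framework-independent check for state-changing endpoints is to verify the Origin header on the server. This is a defense-in-depth sketch, not a replacement for token-based CSRF protection; the allowed origin is an assumption for the example:

```typescript
// Sketch: verify the Origin header on state-changing requests.
// Defense in depth only; combine with your framework's CSRF protection.
const ALLOWED_ORIGINS = new Set(["https://example.com"]); // illustrative

function isAllowedOrigin(originHeader: string | null): boolean {
  // Browsers normally send an Origin header on cross-site and
  // state-changing requests; reject missing or unexpected values.
  if (originHeader === null) return false;
  return ALLOWED_ORIGINS.has(originHeader);
}

// Illustrative usage inside a route handler:
// if (!isAllowedOrigin(request.headers.get("origin"))) {
//   return new Response("Forbidden", { status: 403 });
// }
```

Knowing exactly which endpoints change state is a prerequisite for applying a check like this consistently.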
We need to understand which parts of our own application perform state-changing operations and which endpoints could potentially be accessed from outside.

Rate limiting is also important in AI applications. AI APIs are powerful, but each request has a cost. If an API can be called without any restriction, malicious repeated requests or bots could generate a large number of API calls in a short period of time. This is not only a security issue; it is directly connected to cost, availability, and user experience. For example, an application might classify contact form messages with AI, generate suggestion text from user input, or provide a chat feature. For these kinds of features, it is necessary to consider limits such as the number of requests per user, IP-based restrictions, whether the user is authenticated, input length, and the number of failed attempts.

API key management is also extremely important. When using external services such as the OpenAI API, Supabase, Resend, Stripe, or Cloudflare, mishandling API keys or secret keys can create serious risks. Do not embed secret keys on the client side. Manage them with .env files, and do not accidentally commit them to GitHub. Separate keys that are safe to expose from keys that must only be used on the server. Grant only the minimum necessary permissions, and make sure leaked keys can be rotated quickly. These practices are necessary regardless of the size of the application.

This is especially important in frameworks like Next.js, where client-side and server-side code exist within the same project. Environment variables prefixed with NEXT_PUBLIC_ are exposed to the browser, so secret information must never be placed there. Keys that should only be handled on the server side need to be contained within Server Actions, Route Handlers, server-only modules, or similar server-side boundaries.

Security is not a single feature. If you build a form, validation is needed.
If you display data, XSS protection is needed. If you change state, CSRF protection, authentication, and authorization need to be considered. If you use external APIs, API key management is needed. If you use AI, rate limiting, input control, and privacy design are needed. In other words, security is not something placed around the edges of a product. It belongs at the center of product design.
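As one concrete example from this list, the rate limiting described earlier can be sketched as a fixed-window counter. This in-memory version only works within a single server process; multi-instance or serverless deployments would need a shared store such as Redis. The window and limit values are illustrative:

```typescript
// Sketch: fixed-window rate limiter keyed by user ID or IP address.
// In-memory only: fine for a single process, not for serverless or
// multi-instance deployments (use a shared store there).
const WINDOW_MS = 60_000;  // 1-minute window (illustrative)
const MAX_REQUESTS = 10;   // max AI calls per key per window (illustrative)

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(key: string, now: number = Date.now()): boolean {
  const w = windows.get(key);
  // Start a fresh window if none exists or the current one has expired.
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(key, { start: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_REQUESTS) return false; // over the limit: reject
  w.count += 1;
  return true;
}
```

A handler would call allowRequest with the user ID (or IP for anonymous traffic) before invoking any AI API, returning a 429 response when it yields false.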
3. The More Powerful an Application Becomes, the More Important Safe Architecture Becomes

As an application becomes more powerful, the amount of information it handles also increases: user input, external APIs, personal data, contact form messages, email addresses, logs, images, databases, prompts sent to AI, and the generated results AI returns. When handling this kind of information, it is not enough to simply make the feature work. We also need to consider what could happen during failures, misuse, or abuse.

This is especially important in applications that integrate AI, because the boundaries between input, processing, and output can easily become unclear. User input is passed to AI. AI generates text. The generated text is displayed on the screen. In some cases, it may be used as an email draft or a suggestion. Information from external services may be fetched and sent to AI. Data from a database may be referenced and reflected in the AI output. In this kind of flow, we need to design where data should be validated, how much information should be sent to AI, and whether AI output can be trusted as-is.

AI output is useful, but it is not always correct. It may contain inaccurate information. It may sound too definitive. It may also include content that should not be shown to users. For that reason, AI output should also be treated as something that needs validation, formatting, and review. This is especially important when designing a system that uses AI-generated text for emails. Sending AI-generated text automatically without review should be handled very carefully: add a human review flow, allow editing before sending, keep logs of generated content, check for prohibited words or risky expressions, and clarify the recipient and the target data.

Privacy is also essential. Information sent to AI may include personal data or sensitive content. For that reason, it is important not to send more information than necessary.
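One way to apply this data-minimization principle is to mask obvious identifiers before building a prompt. The sketch below is best-effort only; the regexes are illustrative assumptions and will miss many forms of personal data, so minimizing what is collected in the first place still matters most:

```typescript
// Sketch: mask obvious personal identifiers before sending text to an AI API.
// Regex-based masking is best-effort: it will miss names, addresses, and
// other free-text identifiers, so also minimize the data at its source.
function maskPii(text: string): string {
  return text
    // email addresses (simple illustrative pattern)
    .replace(/[^\s@]+@[^\s@]+\.[^\s@]+/g, "[email]")
    // phone-like digit sequences (very rough, illustrative)
    .replace(/\+?\d[\d\-\s]{8,}\d/g, "[phone]");
}
```

Masked text is still useful for tasks like classification or summarization, while the raw identifiers stay inside our own system.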
For example, when handling names, email addresses, addresses, inquiry details, business information, or customer data, we need to ask whether that information truly needs to be sent to AI. Send only the necessary information. Mask personal data when possible. Limit what is stored in logs. Minimize the data sent to external APIs. Consider data retention periods, and design the system so data can be deleted when needed. These ideas are especially important in AI applications.

Safe architecture is not simply about installing a security library. It means understanding the flow of data, assuming that input cannot be trusted by default, minimizing permissions, and designing the system so that damage does not spread when something goes wrong. For example, do not allow the public site to write freely and directly to the database. Perform necessary checks on the server side. Fetch only the data that is safe to expose. Separate admin operations from general users. Keep API keys on the server side. Limit integration with external services to what is necessary, and apply restrictions to abnormal requests. These design decisions support the safety of the entire application.

Security was also an important theme in this portfolio project. Blog content is managed with Supabase, and the public site only reads articles with the published status. General users are not given write permissions. Input values are validated in the contact form. Email sending is handled on the server side. API keys are not exposed to the client. Even if AI features are added in the future, the design will be based on human review rather than automatic sending. By thinking about security from the stage of building each feature, it becomes less necessary to force security measures into the system later.

Applications in the AI era will become more powerful than ever. They can generate text. They can summarize information. They can analyze external data. They can create personalized suggestions for each user.
They can automate parts of business workflows. However, as applications become more powerful, the responsibility of the designer and developer also grows. What should be automated? Where should human review be included? What data should be handled? What information should be stored? Which processes should run on the server side? Where should limits be applied? Thinking through these questions is, in my view, part of security design in the AI era.

Security is not something to check at the end of development as a simple checklist. It is connected to application architecture, data flow, user experience, operations, and external service integration. Precisely because AI is being integrated into more and more applications, security matters more than ever. To provide powerful features safely, validation, XSS protection, CSRF protection, rate limiting, API key management, privacy protection, and safe architecture all need to be considered as part of product design from the very beginning.
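To close with one concrete example of the human-review principle from section 3: a gate that checks AI-generated email drafts before they enter a review queue. The prohibited-word list and result shape are illustrative assumptions; the key design choice is that no draft is ever auto-approved:

```typescript
// Sketch: gate AI-generated email drafts before human review.
// The prohibited list is illustrative; a real system needs a maintained
// policy list and a human approval step before anything is sent.
const PROHIBITED = ["guaranteed", "risk-free", "100% safe"]; // illustrative

type DraftCheck =
  | { status: "needs_review"; draft: string }
  | { status: "rejected"; reasons: string[] };

function checkDraft(draft: string): DraftCheck {
  const lower = draft.toLowerCase();
  const reasons = PROHIBITED
    .filter((w) => lower.includes(w))
    .map((w) => `contains prohibited phrase: ${w}`);

  if (reasons.length > 0) return { status: "rejected", reasons };

  // Deliberately no "approved" status: every draft still goes to a human.
  return { status: "needs_review", draft };
}
```

Pairing a mechanical check like this with logging of every generated draft keeps the automation useful while keeping a person in the loop.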