Reimagining Zero Trust as GenAI Changes Application Access

Traditional Application Access:

Traditionally, applications, whether they are SaaS services or enterprise applications, are accessed via APIs such as RESTful APIs and gRPC. Even browser-based access to these applications leverages the APIs through JavaScript. Typically, for every resource that the application exposes, there is a corresponding API endpoint with HTTP methods to perform various operations on the resource.

To illustrate, let us consider a simple SaaS HR application. This application manages various resources such as employees, departments, and payroll. Here are some of the key resources and their corresponding API endpoints:

  • Employees: GET /api/employees – Retrieve a list of all employees; POST /api/employees – Add a new employee; GET /api/employees/{id} – Retrieve details of a specific employee; PUT /api/employees/{id} – Update details of a specific employee; DELETE /api/employees/{id} – Remove a specific employee
  • Departments: GET /api/departments – Retrieve a list of all departments; POST /api/departments – Add a new department; GET /api/departments/{id} – Retrieve details of a specific department; PUT /api/departments/{id} – Update details of a specific department; DELETE /api/departments/{id} – Remove a specific department
  • Payroll: GET /api/payrolls – Retrieve a list of all payroll records; POST /api/payrolls – Create a new payroll record; GET /api/payrolls/{id} – Retrieve details of a specific payroll record; PUT /api/payrolls/{id} – Update details of a specific payroll record; DELETE /api/payrolls/{id} – Remove a specific payroll record

In this example, there are three resources – employees, departments, and payrolls – each exposed through its own set of API endpoints and HTTP methods.
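The resource-to-endpoint mapping above can be sketched as a small dispatcher. This is a hypothetical, in-memory illustration of one resource (employees); the paths and seed data are made up for the example, not taken from any real HR product.

```python
# Hypothetical sketch: resource-level routing for the HR example.
# Paths and seed data are illustrative only.
import re

# In-memory "database" for one resource: employees.
employees = {123: {"id": 123, "name": "Asha", "department": "Sales"}}

def handle(method, path, body=None):
    """Dispatch an HTTP-style (method, path) pair to a resource operation."""
    if method == "GET" and path == "/api/employees":
        return 200, list(employees.values())
    if method == "POST" and path == "/api/employees":
        new_id = max(employees) + 1 if employees else 1
        employees[new_id] = {"id": new_id, **(body or {})}
        return 201, employees[new_id]
    m = re.fullmatch(r"/api/employees/(\d+)", path)
    if m:
        emp_id = int(m.group(1))
        if emp_id not in employees:
            return 404, {"error": "not found"}
        if method == "GET":
            return 200, employees[emp_id]
        if method == "PUT":
            employees[emp_id].update(body or {})
            return 200, employees[emp_id]
        if method == "DELETE":
            return 200, employees.pop(emp_id)
    return 404, {"error": "not found"}

status, data = handle("GET", "/api/employees")
print(status, len(data))  # → 200 1
```

The departments and payrolls resources would follow the same pattern, which is exactly the regularity that zero trust rules later exploit.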

Zero Trust Security

Zero trust security utilizing SASE/SSE and service mesh technologies operates with rules, where each rule provides context-specific access to API endpoints and methods. Context includes JWT claims of users accessing the resources; network context from which the user is accessing the application resources; device posture of the endpoint from which the user is accessing the application resources; user geographic locations; days of access; times of access; and other relevant factors.

Here are some sample rules illustrating Zero Trust Security for the HR application:

  1. Rule: Employee Data Access – HR Managers accessing from the corporate network on compliant devices can retrieve lists of employees (GET: /api/employees) and details of specific employees (GET: /api/employees/{id}).
  2. Rule: Department Management – Admins accessing from any network on compliant devices within the country on weekdays between 9 AM and 6 PM can add (POST: /api/departments), update (PUT: /api/departments/{id}), or remove (DELETE: /api/departments/{id}) department details.
  3. Rule: Payroll Record Management – Payroll Specialists accessing from the corporate VPN on compliant devices within office premises can create (POST: /api/payrolls), update (PUT: /api/payrolls/{id}), retrieve details of (GET: /api/payrolls/{id}), or remove (DELETE: /api/payrolls/{id}) specific payroll records.

These rules ensure that access to resources is granted based on a combination of user identity, network context, device compliance, location, and time constraints, realizing the principles of Zero Trust Security.
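Rules of this kind can be expressed as policy-as-code. The sketch below encodes two of the sample rules; the field names (role, network, device_compliant) are illustrative assumptions, and a real SASE/SSE policy engine would also evaluate JWT claims, geolocation, and time windows.

```python
# Hedged sketch of context-aware rule matching for the sample rules above.
# Context field names are illustrative, not a real product schema.
RULES = [
    {   # Rule 1: Employee Data Access
        "role": "hr_manager", "network": "corporate",
        "device_compliant": True,
        "allow": {("GET", "/api/employees"), ("GET", "/api/employees/{id}")},
    },
    {   # Rule 3: Payroll Record Management
        "role": "payroll_specialist", "network": "corporate_vpn",
        "device_compliant": True,
        "allow": {("POST", "/api/payrolls"), ("PUT", "/api/payrolls/{id}"),
                  ("GET", "/api/payrolls/{id}"), ("DELETE", "/api/payrolls/{id}")},
    },
]

def is_allowed(ctx, method, endpoint):
    """Allow the call only if some rule matches the full access context."""
    for rule in RULES:
        if (ctx.get("role") == rule["role"]
                and ctx.get("network") == rule["network"]
                and ctx.get("device_compliant") == rule["device_compliant"]
                and (method, endpoint) in rule["allow"]):
            return True
    return False  # default deny, the core zero trust posture

ctx = {"role": "hr_manager", "network": "corporate", "device_compliant": True}
print(is_allowed(ctx, "GET", "/api/employees"))          # → True
print(is_allowed(ctx, "DELETE", "/api/employees/{id}"))  # → False
```

Note the default-deny fall-through: any request not explicitly matched by a rule is rejected.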

Significance of API Endpoints and Methods

As these examples show, the structure of API endpoints enables fine-grained access controls. API endpoints reflect the resources of the applications, allowing for precise access management. Some applications may not use URL endpoints for each resource; instead, they may use HTTP request headers to indicate the resource being accessed, and in some cases, the resource itself is explicitly mentioned as part of the API JSON body.

In either case, APIs serve as interfaces to the resources and play a crucial role in security. They enable middle security entities such as reverse proxies, API security gateways, and CASB/ZTNA of SASE/SSE systems to extract the resource being managed from the HTTP protocol elements. These systems then apply context-specific rules to determine whether to allow that HTTP session to proceed further. This process ensures that every request is evaluated in real time based on the current context, effectively mitigating risks and protecting sensitive data.
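The extraction step described above can be sketched for all three conventions. The header name "X-Resource" and the body key "resource" are hypothetical conventions chosen for the example; real applications vary.

```python
# Sketch: how a middle security entity might recover the resource being
# accessed from different HTTP protocol elements. The header name
# "X-Resource" and body key "resource" are hypothetical conventions.
import json
import re

def extract_resource(path, headers=None, body=None):
    # 1. URL-style APIs: /api/<resource>[/<id>]
    m = re.match(r"/api/([a-z]+)", path)
    if m:
        return m.group(1)
    # 2. Header-style APIs: resource named in a request header
    if headers and "X-Resource" in headers:
        return headers["X-Resource"]
    # 3. Body-style APIs: resource named inside the JSON body
    if body:
        return json.loads(body).get("resource")
    return None

print(extract_resource("/api/payrolls/42"))                           # → payrolls
print(extract_resource("/rpc", headers={"X-Resource": "employees"}))  # → employees
print(extract_resource("/rpc", body='{"resource": "departments"}'))   # → departments
```

Once the resource is identified, the same context-specific rule evaluation applies regardless of which protocol element carried it.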

GenAI is changing the way applications are accessed

SaaS and enterprise applications are evolving to accept natural language inputs for resource management. This means having a single API endpoint that can interpret and process natural text, audio, or images to perform operations on resources. For instance, an HR application might have one API endpoint where users can input their intent in natural language. The application would then interpret this intent, execute the necessary operations internally, and return the results. If additional clarity or input is needed, the system can prompt the user for more information.

Here are some example natural prompts illustrating how various operations can be performed:

  1. Employee Data Access
     • Natural Prompt: “Show me the list of all employees”
     • Natural Prompt: “Give me the details of employee ID 123”
  2. Department Management
     • Natural Prompt: “Add a new department named ‘Marketing’”
     • Natural Prompt: “Update the name of department ID 456 to ‘Sales’”
     • Natural Prompt: “Remove the department with ID 789”
  3. Payroll Record Creation
     • Natural Prompt: “Create a payroll record for employee ID 123”
     • Natural Prompt: “Update the payroll record ID 456 with new details”
     • Natural Prompt: “Show me the payroll record for ID 789”
     • Natural Prompt: “Delete the payroll record ID 101112”

These examples show that natural language prompts can replace the need to directly call resource-level APIs.
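To make the mapping from prompt to operation concrete, here is a toy intent mapper. In a real application an LLM would interpret the prompt; simple keyword rules stand in for that step here only so the sketch stays runnable, and the verb and resource tables are assumptions for this example.

```python
# Toy intent mapper standing in for the LLM step: it turns a natural-language
# prompt into the equivalent resource-level operation. Keyword rules are a
# stand-in for a real language model.
import re

VERBS = {
    "show": "GET", "give": "GET", "list": "GET",
    "add": "POST", "create": "POST",
    "update": "PUT", "change": "PUT",
    "remove": "DELETE", "delete": "DELETE",
}
RESOURCES = {"employee": "employees", "department": "departments",
             "payroll": "payrolls"}

def parse_intent(prompt):
    """Map a prompt to (HTTP method, endpoint), or (None, None) if unclear."""
    text = prompt.lower()
    method = next((m for v, m in VERBS.items() if v in text), None)
    resource = next((r for k, r in RESOURCES.items() if k in text), None)
    if method is None or resource is None:
        return None, None
    id_match = re.search(r"id (\d+)", text)
    endpoint = f"/api/{resource}" + (f"/{id_match.group(1)}" if id_match else "")
    return method, endpoint

print(parse_intent("Show me the list of all employees"))    # → ('GET', '/api/employees')
print(parse_intent("Delete the payroll record ID 101112"))  # → ('DELETE', '/api/payrolls/101112')
```

The point for security is the inverse of convenience: once all operations funnel through one natural-language endpoint, the operation and resource are no longer visible in the URL and must be recovered from the prompt itself.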

Evolution of Applications

Before talking about security challenges, let us visit some of the ways applications are evolving.

Traditional applications featuring their own GenAI interfaces

Traditional applications with natural language-based access are transforming how users interact with software. Existing SaaS and enterprise applications still provide traditional resource-level API access; however, they are now enhanced to support generative AI-based access. This approach allows users to manage resources through natural language prompts, making interactions more intuitive and user-friendly.

These applications often include their own graphical user interfaces (GUIs) for chatting, which maintain a large memory context that relates successive prompts. This means the software can understand and remember the flow of conversation, providing more accurate and relevant responses to user queries.

By integrating natural language processing, these applications can break down complex operations into simple, conversational commands. For instance, instead of manually entering data or using complex API calls, users can simply say, “Create a payroll record for employee ID 123,” and the system will understand and execute the request seamlessly.

Traditional applications are augmented with solutions from established Generative AI providers

In recent advancements, generative AI services such as ChatGPT are enhancing traditional applications by incorporating features such as function calling and plugins. These upgrades enable existing applications to interact seamlessly with ChatGPT and similar interfaces. Essentially, GenAI service providers capture user prompts through their interfaces, interpret the underlying intent, and identify the appropriate plugins to fulfill these requests by generating and executing RESTful API calls or by issuing new prompts to backend systems. The outcome is then communicated back to the user in natural language, making the process intuitive and user-friendly.

For example, instead of navigating through multiple interfaces to perform operations, users can remain within the ChatGPT environment and execute commands through a simple conversational interface. ChatGPT, which already supports over 800 plugins, allows various SaaS application providers to register their services as plugins. This integration means users do not need to switch tabs or leave the ChatGPT interface to perform tasks covered by these plugins.
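A function-calling integration of this kind can be sketched as a tool schema plus a dispatcher. The schema shape loosely follows the OpenAI-style tool-definition format, but the function name, parameters, and dispatcher below are illustrative assumptions, not a real plugin registration.

```python
# Hedged sketch of function-calling integration. The tool schema shape loosely
# follows OpenAI-style tool definitions; names and data are illustrative.
TOOLS = [{
    "name": "get_employee",
    "description": "Retrieve details of a specific employee",
    "parameters": {
        "type": "object",
        "properties": {"id": {"type": "integer"}},
        "required": ["id"],
    },
}]

def dispatch(tool_call):
    """Execute the function call the model selected, via the backend API."""
    if tool_call["name"] == "get_employee":
        emp_id = tool_call["arguments"]["id"]
        # A real plugin would issue: GET /api/employees/{id}
        return {"id": emp_id, "name": "Asha"}
    raise ValueError("unknown tool")

# Simulated model output for the prompt "Give me the details of employee ID 123"
call = {"name": "get_employee", "arguments": {"id": 123}}
print(dispatch(call))
```

In this flow the GenAI service decides which function to call; the application only sees the resulting API call, which is why security visibility shifts from the application boundary to the prompt.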

AI Agents and Agentic workflows

As applications continue to evolve, AI Agents with agentic workflows are emerging as another transformative trend. These AI Agents are designed to operate autonomously, managing tasks and workflows on behalf of users with minimal intervention. By leveraging advanced machine learning algorithms and sophisticated AI models, these agents can execute a wide variety of tasks, making them indispensable in modern application ecosystems.

AI Agents integrate seamlessly into existing applications, enhancing their capabilities by automating routine tasks and optimizing workflows. For example, in a customer service application, an AI Agent can autonomously handle common queries, process requests, and even escalate complex issues to human agents when necessary. This not only improves efficiency but also ensures a consistent and high-quality user experience.

These agents are equipped with the ability to learn from interactions, continually improving their performance and adapting to user preferences. This means that over time, AI Agents can anticipate user needs and proactively manage tasks, further simplifying operations and reducing the cognitive load on users. They can interface with various systems and applications, creating a cohesive and integrated environment where data and actions flow seamlessly.

One of the key advantages of AI Agents is their ability to operate within defined constraints and objectives, often referred to as agentic workflows. This allows them to execute tasks in a goal-oriented manner, ensuring that actions are aligned with user-defined priorities and organizational objectives. For instance, an AI Agent in a project management tool can autonomously assign tasks, track progress, and provide updates based on the project’s goals and timelines.

AI agents can assist in coordination and communication among different applications and services. They function as intermediaries that interpret natural language prompts and translate them into executable actions across multiple systems. AI agents use reasoning models to create plans consisting of various tasks based on user intent and then execute these tasks by communicating with different plugins and applications. This capability is particularly beneficial in environments where users need to interact with multiple tools and platforms.
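The plan-then-execute loop described above can be reduced to a minimal sketch. The intent, plan steps, and plugin names here are hypothetical; a real agent would derive the plan with a reasoning model and call live plugin APIs.

```python
# Minimal sketch of an agentic workflow: a planner decomposes a user intent
# into ordered tasks, then an executor runs each task against a stubbed
# plugin. Intent, task, and plugin names are hypothetical.
def plan(intent):
    """Derive an ordered task list from user intent (stubbed, not an LLM)."""
    if intent == "onboard employee":
        return [("hr_plugin", "create_employee"),
                ("payroll_plugin", "create_payroll_record"),
                ("it_plugin", "provision_accounts")]
    return []

def execute(tasks):
    """Run each (plugin, action) task; a real agent would call plugin APIs."""
    results = []
    for plugin, action in tasks:
        results.append(f"{plugin}:{action}:done")
    return results

print(execute(plan("onboard employee")))
```

Even this toy version shows the security-relevant property: a single user intent fans out into multiple operations across multiple systems, each of which needs its own access decision.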

Conventional Zero Trust Security implementations are inadequate

As described in the “Zero Trust Security” section, conventional zero trust network security implementations assume that all access to applications is via resource-specific APIs and conventional HTTP actions. Existing CASB and ZTNA solutions fall within these conventional zero trust security implementations.

With the advent of GenAI and its natural language-based interactions, security implementations need to evolve. Traditional APIs and JSON/XML bodies are being supplemented with natural language inputs and outputs, necessitating a deeper analysis to discern user intent and results. Security rules can no longer rely solely on specific API endpoints and body elements; they must now be developed with an understanding of user intent to apply content- and context-specific, granular access controls.

For instance, imagine a user interacting with a payroll application through ChatGPT or the application’s own chat interface. Instead of a direct API call with fixed parameters, the user might say, “Show me the payroll records for the last quarter.” The security system must interpret this natural language request and ensure that both the request and the response adhere to the company’s security policies. This requires a nuanced understanding of both the request’s context and the specific security constraints involved.

Therefore, evolving zero trust security models must include mechanisms to analyze and understand natural language inputs, ensuring that access controls remain robust and contextually relevant in the era of GenAI.
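Such an intent-aware control point can be sketched as follows: the policy is evaluated against a derived (operation, resource) intent rather than a raw endpoint. The intent tuple and context fields are illustrative assumptions; the step that derives intent from the prompt is where an LLM or classifier would sit.

```python
# Sketch: evaluating a zero trust rule against *derived intent* rather than
# a raw API endpoint. Intent and context field names are illustrative.
def check_intent(ctx, intent):
    """intent = (operation, resource), derived from the natural-language prompt."""
    # Example rule: payroll specialists on compliant devices may read payroll data.
    if intent == ("read", "payrolls"):
        return bool(ctx.get("role") == "payroll_specialist"
                    and ctx.get("device_compliant"))
    return False  # default deny for any unrecognized intent

# "Show me the payroll records for the last quarter" → ("read", "payrolls")
prompt_intent = ("read", "payrolls")
ctx = {"role": "payroll_specialist", "device_compliant": True}
print(check_intent(ctx, prompt_intent))                                   # → True
print(check_intent({"role": "intern", "device_compliant": True},
                   prompt_intent))                                        # → False
```

The structure mirrors the endpoint-based rule engine shown earlier; only the input changes, from (method, endpoint) to interpreted intent, which is precisely the evolution the article argues for.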

About the author

Srini Addepalli
Srini Addepalli is a security and Edge computing expert with 25+ years of experience. Srini has multiple patents in networking and security technologies. He holds a BE (Hons) degree in Electrical and Electronics Engineering from BITS, Pilani in India.