Commit 24456b38 authored by Jan Provaznik, committed by George Koltsov

Adds unique request ID to AI actions

* each AI mutation now generates a unique request ID
* this ID is also part of the subscription message, so clients can pair
  responses with their original requests
Parent b373dd95
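Conceptually, a client pairs the two halves like this (a minimal sketch: the `requestId`, `responseBody`, and `errors` fields come from this change, while the `aiAction` input and the `aiCompletionResponse` subscription arguments are assumed and elided):

  # Trigger an AI action; the mutation payload now includes requestId.
  mutation {
    aiAction(input: { ... }) {
      requestId
      errors
    }
  }

  # Every aiCompletionResponse message now carries the same requestId,
  # so the client can match each asynchronous response to its request.
  subscription {
    aiCompletionResponse(...) {
      requestId
      responseBody
      errors
    }
  }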
@@ -1031,6 +1031,7 @@ Input type: `AiActionInput`
 | ---- | ---- | ----------- |
 | <a id="mutationaiactionclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
 | <a id="mutationaiactionerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationaiactionrequestid"></a>`requestId` | [`String`](#string) | ID of the request. |

 ### `Mutation.alertSetAssignees`

@@ -11367,6 +11368,7 @@ Information about a connected Agent.
 | Name | Type | Description |
 | ---- | ---- | ----------- |
 | <a id="airesponseerrors"></a>`errors` | [`[String!]`](#string) | Errors return by AI API as response. |
+| <a id="airesponserequestid"></a>`requestId` | [`String`](#string) | ID of the original request. |
 | <a id="airesponseresponsebody"></a>`responseBody` | [`String`](#string) | Response body from AI API. |

 ### `AlertManagementAlert`
@@ -19,6 +19,10 @@ class Action < BaseMutation
       description: 'Indicates the response format.',
       default_value: :raw

+    field :request_id, GraphQL::Types::String,
+      null: true,
+      description: 'ID of the request.'
+
     def ready?(**args)
       raise Gitlab::Graphql::Errors::ArgumentError, MUTUALLY_EXCLUSIVE_ARGUMENTS_ERROR if methods(args).size != 1

@@ -35,6 +39,7 @@ def resolve(**attributes)
       response = Llm::ExecuteMethodService.new(current_user, resource, method, options).execute

       {
+        request_id: response[:request_id],
         errors: response.success? ? [] : [response.message]
       }
     end
...
@@ -16,6 +16,7 @@ class AiCompletionResponse < BaseSubscription
     def update(*_args)
       {
         response_body: object[:response_body],
+        request_id: object[:request_id],
         errors: object[:errors]
       }
     end
...
@@ -10,6 +10,10 @@ class AiResponseType < BaseObject
       null: true,
       description: 'Response body from AI API.'

+    field :request_id, GraphQL::Types::String,
+      null: true,
+      description: 'ID of the original request.'
+
     field :errors, [GraphQL::Types::String],
       null: true,
       description: 'Errors return by AI API as response.'
...
@@ -30,11 +30,19 @@ def perform
       raise NotImplementedError
     end

+    def perform_async(user, resource, action_name, options)
+      request_id = SecureRandom.uuid
+      options[:request_id] = request_id
+
+      ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, action_name, options)
+
+      success(request_id: request_id)
+    end
+
     def ai_integration_enabled?
       Feature.enabled?(:openai_experimentation)
     end

-    def success(data = nil)
+    def success(data = {})
       ServiceResponse.success(payload: data)
     end
...
@@ -20,9 +20,7 @@ def valid?
     def perform
       return error('The messages are too big') if messages_are_too_big?

-      ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :explain_code, options)
-
-      success
+      perform_async(user, resource, :explain_code, options)
     end

     def messages_are_too_big?
...
@@ -5,14 +5,7 @@ class ExplainVulnerabilityService < BaseService
     private

     def perform
-      ::Llm::CompletionWorker.perform_async(
-        user.id,
-        resource.id,
-        resource.class.name,
-        :explain_vulnerability,
-        options
-      )
-
-      success
+      perform_async(user, resource, :explain_vulnerability, options)
     end
   end
 end
@@ -16,8 +16,7 @@ def valid?
     private

     def perform
-      ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :generate_description, options)
-
-      success
+      perform_async(user, resource, :generate_description, options)
     end
   end
 end
@@ -7,8 +7,7 @@ class GenerateSummaryService < BaseService
     private

     def perform
-      ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :summarize_comments)
-
-      success
+      perform_async(user, resource, :summarize_comments, options)
     end

     def valid?
...
@@ -11,9 +11,7 @@ def valid?
     private

     def perform
-      ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :generate_test_file, options)
-
-      success
+      perform_async(user, resource, :generate_test_file, options)
     end
   end
 end
@@ -9,9 +9,7 @@ def valid?
     private

     def perform
-      ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :tanuki_bot, options)
-
-      success
+      perform_async(user, resource, :tanuki_bot, options)
     end
   end
 end
@@ -22,7 +22,8 @@ def perform(user_id, resource_id, resource_class, ai_action_name, options = {})
       return unless user.can?("read_#{resource.to_ability_name}", resource)
       return unless resource.send_to_ai?

-      ai_completion = ::Gitlab::Llm::CompletionsFactory.completion(ai_action_name.to_sym)
+      params = { request_id: options.delete(:request_id) }
+      ai_completion = ::Gitlab::Llm::CompletionsFactory.completion(ai_action_name.to_sym, params)
       ai_completion.execute(user, resource, options) if ai_completion
     end
...
@@ -0,0 +1,18 @@
+# frozen_string_literal: true
+
+module Gitlab
+  module Llm
+    module Completions
+      class Base
+        def initialize(ai_prompt_class, params = {})
+          @ai_prompt_class = ai_prompt_class
+          @params = params
+        end
+
+        private
+
+        attr_reader :ai_prompt_class, :params
+      end
+    end
+  end
+end
@@ -30,11 +30,11 @@ class CompletionsFactory
         }
       }.freeze

-      def self.completion(name)
+      def self.completion(name, params = {})
         return unless COMPLETIONS.key?(name)

         service_class, prompt_class = COMPLETIONS[name].values_at(:service_class, :prompt_class)
-        service_class.new(prompt_class)
+        service_class.new(prompt_class, params)
       end
     end
   end
...
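Taken together, the worker and factory changes thread the request ID out of band from the prompt options: `CompletionWorker` splits `request_id` off into a separate `params` hash, and the factory hands it to the new `Gitlab::Llm::Completions::Base` constructor. A minimal sketch of the flow (names are from this diff; the values and the `:explain_code` action are illustrative):

  # Llm::BaseService#perform_async generates the ID and enqueues the worker.
  request_id = SecureRandom.uuid
  options[:request_id] = request_id

  # Llm::CompletionWorker#perform removes the ID from options so prompt
  # templates never see it, and passes it to the completion as params.
  params = { request_id: options.delete(:request_id) }
  completion = ::Gitlab::Llm::CompletionsFactory.completion(:explain_code, params)

  # Each completion forwards params[:request_id] to ResponseService, which
  # publishes it on the GraphQL subscription next to the AI response.
  completion.execute(user, resource, options) if completion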
@@ -4,24 +4,16 @@ module Gitlab
   module Llm
     module OpenAi
       module Completions
-        class ExplainCode
-          def initialize(ai_prompt_class)
-            @ai_prompt_class = ai_prompt_class
-          end
-
+        class ExplainCode < Gitlab::Llm::Completions::Base
           def execute(user, project, options)
             options = ai_prompt_class.get_options(options[:messages])

             ai_response = Gitlab::Llm::OpenAi::Client.new(user).chat(content: nil, **options)

-            ::Gitlab::Llm::OpenAi::ResponseService.new(user, project, ai_response, options: {}).execute(
-              Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new
-            )
+            ::Gitlab::Llm::OpenAi::ResponseService
+              .new(user, project, ai_response, options: { request_id: params[:request_id] })
+              .execute(Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new)
           end
-
-          private
-
-          attr_reader :ai_prompt_class
         end
       end
     end
...
@@ -4,32 +4,27 @@ module Gitlab
   module Llm
     module OpenAi
       module Completions
-        class ExplainVulnerability
+        class ExplainVulnerability < Gitlab::Llm::Completions::Base
           DEFAULT_ERROR = 'An unexpected error has occurred.'

-          def initialize(template_class)
-            @template_class = template_class
-          end
-
           def execute(user, vulnerability, _options)
-            template = template_class.new(vulnerability)
+            template = ai_prompt_class.new(vulnerability)
             response = response_for(user, template)

             ::Gitlab::Llm::OpenAi::ResponseService
-              .new(user, vulnerability, response, options: {})
+              .new(user, vulnerability, response, options: { request_id: params[:request_id] })
               .execute(Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new)
           rescue StandardError => error
             Gitlab::ErrorTracking.track_exception(error)

             ::Gitlab::Llm::OpenAi::ResponseService
-              .new(user, vulnerability, { error: { message: DEFAULT_ERROR } }.to_json, options: {})
+              .new(user, vulnerability, { error: { message: DEFAULT_ERROR } }.to_json,
+                options: { request_id: params[:request_id] })
               .execute
           end

           private

-          attr_reader :template_class
-
           def response_for(user, template)
             client_class = ::Gitlab::Llm::OpenAi::Client

             client_class
...
@@ -4,17 +4,13 @@ module Gitlab
   module Llm
     module OpenAi
       module Completions
-        class GenerateDescription
+        class GenerateDescription < Gitlab::Llm::Completions::Base
           TOTAL_MODEL_TOKEN_LIMIT = 4000
           INPUT_TOKEN_LIMIT = (TOTAL_MODEL_TOKEN_LIMIT * 0.5).to_i.freeze
           INPUT_CONTENT_LIMIT = INPUT_TOKEN_LIMIT * 4

-          def initialize(ai_prompt_class)
-            @ai_prompt_class = ai_prompt_class
-          end
-
           def execute(user, issuable, options)
             return unless user
             return unless issuable

@@ -45,13 +41,10 @@ def execute(user, issuable, options)
               **options
             )

-            ::Gitlab::Llm::OpenAi::ResponseService.new(user, issuable, ai_response, options: {})
+            ::Gitlab::Llm::OpenAi::ResponseService
+              .new(user, issuable, ai_response, options: { request_id: params[:request_id] })
               .execute(Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new)
           end
-
-          private
-
-          attr_reader :ai_prompt_class
         end
       end
     end
...
@@ -4,14 +4,10 @@ module Gitlab
   module Llm
     module OpenAi
       module Completions
-        class GenerateTestFile
+        class GenerateTestFile < Gitlab::Llm::Completions::Base
           TOTAL_MODEL_TOKEN_LIMIT = 4000
           OUTPUT_TOKEN_LIMIT = (TOTAL_MODEL_TOKEN_LIMIT * 0.25).to_i.freeze

-          def initialize(ai_prompt_class)
-            @ai_prompt_class = ai_prompt_class
-          end
-
           def execute(user, merge_request, options)
             return unless user
             return unless merge_request

@@ -22,14 +18,12 @@ def execute(user, merge_request, options)
             ai_response = Gitlab::Llm::OpenAi::Client.new(user).chat(content: nil, **ai_options)

+            options[:request_id] = params[:request_id]
             ::Gitlab::Llm::OpenAi::ResponseService.new(user, merge_request, ai_response, options: options).execute(
               Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new
             )
           end
-
-          private
-
-          attr_reader :ai_prompt_class
         end
       end
     end
...
@@ -4,7 +4,7 @@ module Gitlab
   module Llm
     module OpenAi
       module Completions
-        class SummarizeAllOpenNotes
+        class SummarizeAllOpenNotes < Gitlab::Llm::Completions::Base
           TOTAL_MODEL_TOKEN_LIMIT = 4000

           # 0.5 + 0.25 = 0.75, leaving a 0.25 buffer for the input token limit

@@ -24,10 +24,6 @@ class SummarizeAllOpenNotes
           # see https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
           INPUT_CONTENT_LIMIT = INPUT_TOKEN_LIMIT * 4

-          def initialize(ai_prompt_class)
-            @ai_prompt_class = ai_prompt_class
-          end
-
           def execute(user, issuable, _ = {})
             return unless user
             return unless issuable

@@ -46,13 +42,10 @@ def execute(user, issuable, _ = {})
               **options
             )

-            ::Gitlab::Llm::OpenAi::ResponseService.new(user, issuable, ai_response, options: {})
+            response_options = { request_id: params[:request_id] }
+            ::Gitlab::Llm::OpenAi::ResponseService.new(user, issuable, ai_response, options: response_options)
               .execute(Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new)
           end
-
-          private
-
-          attr_reader :ai_prompt_class
         end
       end
     end
...
@@ -4,11 +4,7 @@ module Gitlab
   module Llm
     module OpenAi
       module Completions
-        class TanukiBot
-          def initialize(ai_prompt_class)
-            @ai_prompt_class = ai_prompt_class
-          end
-
+        class TanukiBot < Gitlab::Llm::Completions::Base
           # After we remove REST API, refactor so that we use methods defined in templates/tanuki_bot.rb, e.g.:
           # initial_prompt = ai_prompt_class.initial_prompt(question)
           def execute(user, resource, options)

@@ -16,14 +12,11 @@ def execute(user, resource, options)
             response = ::Gitlab::Llm::TanukiBot.execute(current_user: user, question: question)

-            ::Gitlab::Llm::OpenAi::ResponseService.new(user, resource, response, options: {}).execute(
+            response_options = { request_id: params[:request_id] }
+            ::Gitlab::Llm::OpenAi::ResponseService.new(user, resource, response, options: response_options).execute(
               Gitlab::Llm::OpenAi::ResponseModifiers::TanukiBot.new
             )
           end
-
-          private
-
-          attr_reader :ai_prompt_class
         end
       end
     end
...