Conversation

@QuantumGhost (Collaborator) commented Jan 8, 2026

Reverts #30651

The original fix seems correct on its own. However, for chatflows with multiple answer nodes, the message_replace command only preserves the output of the last executed answer node.

The following DSL is an example; a short sketch of the resulting failure mode follows it:

DSL
app:
  description: ''
  icon: 🤖
  icon_background: '#FFEAD5'
  mode: advanced-chat
  name: muitiple-answer-streaming
  use_icon_as_answer_icon: false
dependencies: []
kind: app
version: 0.5.0
workflow:
  conversation_variables: []
  environment_variables: []
  features:
    file_upload:
      allowed_file_extensions:
      - .JPG
      - .JPEG
      - .PNG
      - .GIF
      - .WEBP
      - .SVG
      allowed_file_types:
      - image
      allowed_file_upload_methods:
      - local_file
      - remote_url
      enabled: false
      fileUploadConfig:
        attachment_image_file_size_limit: 2
        audio_file_size_limit: 50
        batch_count_limit: 5
        file_size_limit: 15
        file_upload_limit: 10
        image_file_batch_limit: 10
        image_file_size_limit: 10
        single_chunk_attachment_limit: 10
        video_file_size_limit: 100
        workflow_file_upload_limit: 10
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - local_file
        - remote_url
      number_limits: 3
    opening_statement: ''
    retriever_resource:
      enabled: true
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        isInIteration: false
        isInLoop: false
        sourceType: start
        targetType: answer
      id: 1767856713829-source-1767856720538-target
      source: '1767856713829'
      sourceHandle: source
      target: '1767856720538'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        isInLoop: false
        sourceType: start
        targetType: code
      id: 1767856713829-source-1767856724231-target
      source: '1767856713829'
      sourceHandle: source
      target: '1767856724231'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInLoop: false
        sourceType: code
        targetType: answer
      id: 1767856724231-source-answer-target
      source: '1767856724231'
      sourceHandle: source
      target: answer
      targetHandle: target
      type: custom
      zIndex: 0
    nodes:
    - data:
        selected: false
        title: User Input
        type: start
        variables: []
      height: 73
      id: '1767856713829'
      position:
        x: 79
        y: 282
      positionAbsolute:
        x: 79
        y: 282
      selected: true
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 242
    - data:
        answer: Answer 1
        selected: false
        title: Answer
        type: answer
        variables: []
      height: 100
      id: answer
      position:
        x: 680
        y: 303
      positionAbsolute:
        x: 680
        y: 303
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 242
    - data:
        answer: Answer 2
        selected: false
        title: Answer 2
        type: answer
        variables: []
      height: 100
      id: '1767856720538'
      position:
        x: 378
        y: 411
      positionAbsolute:
        x: 378
        y: 411
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 242
    - data:
        code: "\ndef main():\n    import time\n    time.sleep(1)\n    return {\n \
          \   }\n"
        code_language: python3
        outputs: {}
        selected: false
        title: Code
        type: code
        variables: []
      height: 52
      id: '1767856724231'
      position:
        x: 387
        y: 303
      positionAbsolute:
        x: 387
        y: 303
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 242
    viewport:
      x: 235.59889940157154
      y: -27.649150847507997
      zoom: 1.0139594797900295
  rag_pipeline_variables: []
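
In the flow above, the Answer 2 node streams its text first (it is connected directly to Start), the Code node sleeps for a second, and the Answer node then streams "Answer 1" last. The sketch below is a minimal, simplified model of the problem; the TaskState class and helper functions are illustrative stand-ins, not the actual classes in generate_task_pipeline.py. Because the final message_replace is built from a single node's output, whatever the other answer node already streamed is dropped.

Sketch (Python, illustrative only)

# Simplified stand-ins -- not Dify's real pipeline code.
class TaskState:
    def __init__(self) -> None:
        self.answer = ""  # everything streamed to the client so far


def stream_chunk(state: TaskState, chunk: str) -> None:
    # Chunks from every answer node are appended to the same buffer.
    state.answer += chunk


def on_answer_node_succeeded(state: TaskState, final_output: str) -> list[str]:
    # Simplified version of the reverted behaviour: if this node's final output
    # differs from the accumulated answer, emit message_replace carrying ONLY
    # this node's output, overwriting what other answer nodes streamed earlier.
    if final_output and final_output != state.answer:
        state.answer = final_output
        return [f"message_replace: {final_output}"]
    return []


state = TaskState()
stream_chunk(state, "Answer 2")  # Answer 2 streams right after Start
stream_chunk(state, "Answer 1")  # Answer streams after the Code node's sleep
print(on_answer_node_succeeded(state, "Answer 1"))  # ['message_replace: Answer 1']
print(state.answer)  # 'Answer 1' -- the streamed "Answer 2" is gone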

@QuantumGhost requested a review from fatelei on January 8, 2026 01:31
@gemini-code-assist

Summary of Changes

Hello @QuantumGhost, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request performs a full revert of a prior fix, specifically removing functionality in the advanced chat pipeline that handled dynamic updates to the ANSWER node's output. The reverted code previously ensured that if an ANSWER node's final output changed due to variable assignments during workflow execution, the UI would receive an update. Along with the core logic, all associated unit tests for this feature have also been removed, effectively rolling back the entire change.

Highlights

  • Revert of Previous Fix: This pull request fully reverts the changes introduced in langgenius/dify#30651, which aimed to fix an issue related to assigning default values.
  • Removal of ANSWER Node Logic: The logic within _handle_node_succeeded_event that checked for discrepancies between an ANSWER node's final output and the accumulated task_state.answer has been removed. This logic was responsible for sending message_replace events to the UI when variables were updated during workflow execution; a rough reconstruction of it is sketched after this list.
  • Removal of Associated Unit Tests: The dedicated unit test file test_generate_task_pipeline_answer_node.py, which covered various scenarios for the ANSWER node's message_replace logic, has been entirely removed.
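
The sketch below is a hypothetical reconstruction of that removed check, based only on the description in this summary; apart from the message_replace event name and the comparison against the accumulated task_state.answer, the classes and signatures are illustrative stand-ins rather than Dify's actual API.

Sketch (Python, illustrative only)

# Hypothetical reconstruction -- not the real _handle_node_succeeded_event.
from dataclasses import dataclass, field
from enum import Enum


class NodeType(Enum):
    ANSWER = "answer"
    CODE = "code"


@dataclass
class TaskState:
    answer: str = ""  # answer text accumulated from streaming


@dataclass
class NodeSucceededEvent:
    node_type: NodeType
    outputs: dict = field(default_factory=dict)


def handle_node_succeeded(state: TaskState, event: NodeSucceededEvent, publish) -> None:
    # Only ANSWER nodes were inspected; when a node's final rendered output no
    # longer matched the accumulated answer (e.g. a variable was assigned
    # mid-run), a message_replace event was published with that output.
    if event.node_type is not NodeType.ANSWER:
        return
    final_output = event.outputs.get("answer", "")
    if final_output and final_output != state.answer:
        state.answer = final_output
        publish({"event": "message_replace", "answer": final_output})

Against the example chatflow in the PR description, this would publish a message_replace containing only the last answer node's text, which is the multi-answer regression that motivates this revert.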

@gemini-code-assist bot left a comment


Code Review

This pull request reverts a previous change that added logic to update the final answer in advanced chat applications after streaming. The changes consist of removing the specific logic for ANSWER nodes from _handle_node_succeeded_event in generate_task_pipeline.py and deleting the corresponding test file. The revert is implemented cleanly, with both the feature code and its dedicated tests being removed. I find no issues with the implementation of this revert.

@fatelei changed the title from Revert "fix: fix assign value stand as default" to fix: fix assign value stand as default revert on Jan 8, 2026
@dosubot bot added the lgtm (This PR has been approved by a maintainer) label on Jan 8, 2026
@QuantumGhost marked this pull request as ready for review on January 9, 2026 08:01
@dosubot bot added the size:L (This PR changes 100-499 lines, ignoring generated files.) and 🌊 feat:workflow (Workflow related stuff.) labels on Jan 9, 2026
@QuantumGhost changed the title from fix: fix assign value stand as default revert to Revert "fix: fix assign value stand as default" on Jan 9, 2026
@QuantumGhost changed the title from Revert "fix: fix assign value stand as default" to fix: fix assign value stand as default on Jan 9, 2026
@QuantumGhost changed the title from fix: fix assign value stand as default to Revert "fix: fix assign value stand as default (#30651)" on Jan 9, 2026
@QuantumGhost changed the title from Revert "fix: fix assign value stand as default (#30651)" to Revert: "fix: fix assign value stand as default (#30651)" on Jan 9, 2026
@QuantumGhost changed the title from Revert: "fix: fix assign value stand as default (#30651)" to revert: "fix: fix assign value stand as default (#30651)" on Jan 9, 2026
@QuantumGhost merged commit ae0a26f into main on Jan 9, 2026 (20 of 24 checks passed)
@QuantumGhost deleted the revert-30651-issue-30650 branch on January 9, 2026 08:08