
[don't merge]Adding test case with ai agent help #1911

Draft · wants to merge 10 commits into main
Conversation

SamYuan1990
Collaborator

I know CNCF does not ban either AI agents or AI copilots, but to be honest with myself, this PR may carry AI ethics risks. I just want to flag that up front.

Here is how I made this PR:
1st: https://github.com/SamYuan1990/kepler/actions/runs/12781054726/job/35628330740
2nd: https://github.com/SamYuan1990/kepler/pull/22/files with the content here, manually fixed at my local machine.
3rd: submitted the PR here.

This is a POC of #1905.
With Ricardo Aravena's help, I am going to share this PR at the CNCF AI WG meeting on Jan 24, 8am PT.

I hope @rootfs , @sunya-ch , @sthaha , @vprashar2929 , @vimalk78 can help by reviewing just the added Golang file changes.

If we decide to merge the content, I will rebase the files against the latest upstream and remove the dummy changes such as Tasks.json (which was only used as a config file for the POC, not for Kepler's repo).

SamYuan1990 and others added 10 commits January 15, 2025 10:45
Signed-off-by: Sam Yuan <yy19902439@126.com>
Signed-off-by: Sam Yuan <yy19902439@126.com>
Signed-off-by: Sam Yuan <yy19902439@126.com>
Signed-off-by: Sam Yuan <yy19902439@126.com>
Signed-off-by: Sam Yuan <yy19902439@126.com>
Signed-off-by: Sam Yuan <yy19902439@126.com>
Signed-off-by: Sam Yuan <yy19902439@126.com>
@SamYuan1990 SamYuan1990 changed the title Adding test case with ai agent help [don't merge]Adding test case with ai agent help Jan 15, 2025
Contributor

github-actions bot commented Jan 15, 2025

🤖 SeineSailor

Here is a concise summary of the pull request changes:

Summary: This pull request introduces a proof-of-concept for adding test cases using an AI agent in the Kepler project. The changes include new test files for the Exporter package, AppConfig, HealthProbe, and RootHandler functions, as well as a test case for the HandleInactiveContainers function in the collector package.

Key Modifications:

  • Added a JSON file with tasks for generating unit tests and benchmark tests in Golang using the Ginkgo framework and regex matching.
  • Introduced new test files for the Exporter package, AppConfig, HealthProbe, and RootHandler functions (a minimal sketch of what such a Ginkgo test looks like follows this list).
  • Added a test case for the HandleInactiveContainers function in the collector package.
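
To give reviewers a concrete picture, here is a minimal sketch of a Ginkgo/Gomega handler test in the style described above. The handler and all names in it are hypothetical stand-ins defined inline for self-containment; this is not Kepler's actual RootHandler, and the generated tests in this PR may be structured differently.

```go
// root_handler_test.go — a minimal sketch of a Ginkgo/Gomega handler test.
// The handler below is defined inline only to keep the sketch self-contained;
// it is not Kepler's actual RootHandler.
package exporter

import (
	"net/http"
	"net/http/httptest"
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// rootHandler is a placeholder for the HTTP handler under test.
func rootHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	_, _ = w.Write([]byte("Kepler exporter is healthy"))
}

// TestHandlers wires the Ginkgo suite into `go test`.
func TestHandlers(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Exporter Handlers Suite")
}

var _ = Describe("rootHandler", func() {
	It("responds with 200 and a status message", func() {
		req := httptest.NewRequest(http.MethodGet, "/", nil)
		rec := httptest.NewRecorder()

		rootHandler(rec, req)

		Expect(rec.Code).To(Equal(http.StatusOK))
		Expect(rec.Body.String()).To(ContainSubstring("healthy"))
	})
})
```

The only non-standard dependencies are Ginkgo v2 and Gomega; `go test ./...` picks the suite up through the `TestHandlers` bootstrap.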

Impact on Codebase: The changes do not alter the signatures of exported functions, global data structures, or variables, and do not introduce any visible changes to the external interface or behavior of the code.

Observations/Suggestions:

  • The addition of test cases using an AI agent is a great step towards improving the project's test coverage and reliability.
  • It would be beneficial to consider integrating the AI agent-generated tests into the existing testing framework to ensure seamless execution and reporting.
  • Further review and refinement of the test cases may be necessary to ensure they are comprehensive and effective in covering the desired functionality.

@rootfs
Contributor

rootfs commented Jan 22, 2025

@SamYuan1990 can you check if generated code is in line with CNCF guideline? cc @caniszczyk

@SamYuan1990
Collaborator Author

> @SamYuan1990 can you check if generated code is in line with CNCF guideline? cc @caniszczyk

Well, where are the specific terms?
I also want to know: what if we build a GitHub Action which:

  • figures out which specific functions are lacking or need improved unit test coverage.
  • figures out which specific functions are lacking or need improved code documentation coverage.
  • offers LLM-generated suggestions as an optional feature, or gates them on a threshold, e.g. only when test coverage falls below a threshold (for example 10%).
  • lets users enable the feature via settings, and provides full logs for traceability.
  • when the feature is enabled, invokes the LLM to auto-generate code/docs and opens a PR back as a suggestion.
  • after that, similar to a lint failure on a dependency-bot auto bump, a maintainer jumps in, checks the LLM's suggestion, fixes any breakage locally, and gets things done.

In this case, which terms should we check? (A minimal sketch of the coverage-gate logic follows this comment.)
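
For the coverage-threshold part of this proposal, a minimal sketch of the gating logic is shown below. It assumes the action has already written the output of `go tool cover -func=coverage.out` to a file; the file name, helper name, and the 10% threshold are illustrative only, and nothing here exists in Kepler or in any current action.

```go
// Hypothetical sketch of the coverage gate described above: read the output of
// `go tool cover -func=coverage.out`, extract the total coverage, and decide
// whether an LLM-based suggestion step should be triggered.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

// parseTotalCoverage scans `go tool cover -func` output for the final
// "total:" line and returns the percentage it reports.
func parseTotalCoverage(path string) (float64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	// Example line: "total:    (statements)    42.7%"
	re := regexp.MustCompile(`^total:.*\s([0-9.]+)%$`)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if m := re.FindStringSubmatch(scanner.Text()); m != nil {
			return strconv.ParseFloat(m[1], 64)
		}
	}
	return 0, fmt.Errorf("no total coverage line found in %s", path)
}

func main() {
	const threshold = 10.0 // the example threshold from the discussion

	total, err := parseTotalCoverage("cover-func.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if total < threshold {
		// In the proposed workflow this is where the action would invoke the
		// LLM and open a suggestion PR; the sketch only signals the decision.
		fmt.Printf("coverage %.1f%% is below %.1f%%: request LLM-generated test suggestions\n", total, threshold)
		return
	}
	fmt.Printf("coverage %.1f%% meets the %.1f%% threshold: no suggestions needed\n", total, threshold)
}
```

In the proposed workflow, the below-threshold branch is where the action would call the LLM and open a suggestion PR back to the repo; here it only makes the decision and logs it.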
