feat(autoedit): combine inline completion provider and selection change #6147
Conversation
a934743 to 532640c
this.lastTextChangeTimeStamp &&
Date.now() - this.lastTextChangeTimeStamp <
    RESET_SUGGESTION_ON_CURSOR_CHANGE_AFTER_INTERVAL_MS
) {
I think this check has been helpful to ensure that we don't trigger the feature unnecessarily.
Removing it triggers the feature on every cursor movement, even when I am just looking at the code. That could lead to more false positives and higher load on the deployment.
I think this is a good heuristic: an initial change from the user gives us an indication that they might want to modify the code, and a cooldown period of 60 seconds without any changes suggests that the user may not want to modify the code and may just want to read it.
Let me know your thoughts.
This creates a hidden mode that users aren't aware of and can't discover intuitively. How does this look to users? They start editing and get helpful suggestions. Then, after making an edit, they spend more than 60 seconds searching for the next place to update, only to find they're not getting suggestions in the same way they did a minute ago.
High load on the deployment
We’ve enabled completion preloading with a 150ms debounce time and haven’t experienced any critical load spikes from it. Would it make sense to try unconditional loading and see if we encounter throughput issues?
A cooldown period of 60 seconds without changes suggests the user might not want to modify the code
If we rely on suggestions being discarded upon cursor movement, we could implement a dynamic debounce timeout. This timeout could increase after several consecutive suggestions and reset to the minimum upon any document edit or suggestion acceptance. This way, we avoid introducing a new hidden control mode (e.g., "if you don’t edit for X seconds, we disable the feature"). Instead, it remains tied to cursor movement and typing but becomes less intrusive if the user signals they’re not interested.
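A minimal sketch of the dynamic debounce idea described above (the class name, values, and structure here are illustrative assumptions, not code from the PR):

```typescript
// Illustrative sketch only: names and values are assumptions, not the actual
// Cody implementation.
const MIN_DEBOUNCE_MS = 150
const MAX_DEBOUNCE_MS = 2000

class DynamicDebounce {
    private consecutiveUnacceptedSuggestions = 0

    /** Current debounce grows with each suggestion the user ignores. */
    get debounceMs(): number {
        return Math.min(
            MIN_DEBOUNCE_MS * 2 ** this.consecutiveUnacceptedSuggestions,
            MAX_DEBOUNCE_MS
        )
    }

    /** Called when a suggestion is shown but discarded (e.g. by cursor movement). */
    onSuggestionDiscarded(): void {
        this.consecutiveUnacceptedSuggestions++
    }

    /** Any document edit or accepted suggestion resets the debounce to its minimum. */
    reset(): void {
        this.consecutiveUnacceptedSuggestions = 0
    }
}
```

With this shape, the feature stays enabled at all times; it just becomes progressively less eager while the user keeps ignoring suggestions, and snaps back to its most responsive state as soon as they edit or accept one.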
They start editing and get helpful suggestions. Then, after making an edit, they spend more than 60 seconds searching for the next place to update, only to find they’re not getting suggestions in the same way they did a minute ago.
I think this is a trade-off. I initially integrated the PR without this check, and sometimes it would show me false positives even when I was just looking at the code, which was distracting. If the user makes a text modification, we start showing the suggestion on cursor movement again, so this should be okay imo. We can tune the timeout as we play with the feature, or come up with a better heuristic, but I think some logic is definitely required so that we don't trigger the feature on every cursor movement, only when the user intends to make a change.
const isPrefixMatch = prediction.startsWith(codeToReplace.codeToRewritePrefix)
const isSuffixMatch = prediction.endsWith(codeToReplace.codeToRewriteSuffix)
const isSuffixMatch = prediction.endsWith(codeToRewriteAfterCurrentLine)
Should we also check if the currentLineSuffix is a subsequence of the predictedLine?
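For illustration, a subsequence check along these lines could work (hypothetical helper, not code from the PR):

```typescript
// Hypothetical helper, not part of the PR: returns true if every character of
// `needle` appears in `haystack` in order (not necessarily contiguously).
function isSubsequence(needle: string, haystack: string): boolean {
    let matched = 0
    for (const char of haystack) {
        if (matched < needle.length && char === needle[matched]) {
            matched++
        }
    }
    return matched === needle.length
}

// Example: the existing line suffix survives inside the predicted line even
// though new text was inserted in between.
console.log(isSubsequence('arg)', 'newArg, arg)')) // true
```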
a895003 to 087afda
@valerybugakov I tried to simplify the flow based on my comments in this PR: https://github.com/sourcegraph/cody/pull/6154/files
Please let me know if this looks okay; if so, we can merge that PR into this one.
dbf6585 to 9b00022
I addressed your comments, @hitesh-1997. Let's merge this change as-is. It achieves its primary goal by integrating autoedits with the InlineCompletionItemProvider. We can amend the behavior based on the Slack discussion in follow-up PRs.
Combine inline completion provider and selection change.
Test plan
CI