> Connect Copilot coding agent with Jira, Azure Boards or Linear to delegate work to Copilot in one click without leaving your project management tool.
From an empathetic perspective, I hope for the sake of Raycast's customers and employees that Microsoft is not in any kind of negotiations with Raycast at the moment.
I just want to note that the case you link to was 25 years ago. The number of people working at Microsoft at the time who are still working there today is very small.
The comment was brief, and added detail is welcome, but corporate mission/culture often extends over time even with changes in leadership. Partly because of what was accepted in the past.
Had I not seen this thread, I would have assumed they consented to it, and I'd never willingly interact with Raycast or its team in any way. I still have a somewhat negative opinion, so I think it's safe to say there are damages.
As a data point, I consent to be counted as associating raycast with the Microsoft brand and viewing them negatively as a consequence of using pull requests as an advertising canvas.
I hear you, but honestly it’s kind of funny to think a company would send a C&D to stop free advertising for them. I’d be surprised to see any company ever do that; whatever people think small brands are worth, they’re actually worth way less than that.
Microslop for a while now seems to be testing exactly how much you can abuse the user before they move somewhere else. Windows is a prime example. Everything is ads, tracking, popups, annoyances, etc.
They have got away with it for a while because a lot of users have largely been stuck, but they are in real trouble now with Apple providing meaningful competition.
Microsoft can show a screen-wide dick enlargement ad instead of everyone's wallpaper and people will still be using Windows for decades. They already know it.
Yeah but at least a dozen Microsoft employees went on a seemingly scripted blitz on X about how they’re ready to start listening to feedback and…
* checks notes *
Only have Copilot shoehorned into most things instead of everything. And some shit about Windows developers, which isn’t exactly going to fix the glaring issues with the OS itself.
It's because of the way companies align their own behavior. "Listening to feedback" is just a good intention but increasing engagement with copilot is a measurable goal. With apologies to George Orwell, imagine an OKR stamping on a human face--forever.
Imagine if just having the Copilot extension installed becomes an excuse at some point for them to steal our code to train their AI models. Not sure if they already do this.
> Copilot may include both automated and manual (human) processing of data. You shouldn’t share any information with Copilot that you don’t want us to review.
so they're reserving the right to process whatever it looks at.
You're sending them your codebase already, as part of the prompt for generating new snippets, debugging, etc. So they have access to it.
They'd be absolute fools not to be using the results of sessions to continue to refine their models, and they already reserved the rights to look at what you send them, so yeah - they're doing it.
Also for some reason that site hijacks your scrolling and tries to "smooth" it, which just makes it feel more unresponsive as most browsers already have smooth scrolling?
This is the core issue. These tools operate with very little transparency about what they're doing under the hood. Even basic stuff like how much of your session resources have been consumed is hidden from you in most tools.
You’re pointing to something entirely different: those are Copilot-created PRs. They can include anything Copilot wants to include. People using the Copilot PR feature know what they’re buying into.
OP is about Copilot doing post-hoc editing of a human-created PR to include an ad, allegedly without knowledge or approval of the creator (well I assume they did give their team member permission to update the PR body, but apparently not for this kind of crap).
It’s like how Disney Plus “ad free” tier shows you ads for Hulu and Disney Perks. They probably redefine “ad” in their terms of service so their own ads are called something else.
I looked into it at one point, as I was disgusted by the unskippable advertisements when paying for an ad-free tier on one of the myriad streaming platforms. Apparently, they distinguish between "advertisements" for a product or service and "promotions" for themselves. I get why that would be a reasonable internal distinction, as the former would require sign-off from the business paying for the advertisement, while the latter would only need internal approval, but it's a pointless distinction after that.
Leave the poor fellow alone. It was butchered enough in the late 90s and early 00s, and has since been repurposed for a greater good. I'd argue not everything Microsoft creates is bad; it just needs someone else to make it better.
It's definitely an ad, I think the only real question is whether it's just marketing Copilot or whether part of their partnership with other companies is advertising the integration in this way. The links all go to Copilot docs pages on the integrations, so they're not typical tracked link advertising campaigns.
Honestly, it being a "tip" or "ad" is exactly the same.
What I mean is that even if I take that at face value and accept that it's not an ad, and I can just about see from a certain level of corporate brainwashing how one could believe that, it's still completely unacceptable.
Calling it a "tip" is definitely just a semantic trick to make it slightly less easy to frame a negative response and galvanise opinion against the practise. Reminds me a bit of confirmation shaming (which, now I think about it, I haven't seen in a while) where you're made to click a button that says something like "No, I don't want an amazing 15% off my next order by signing up to your email list".
I was playing Mario Party Jamboree this weekend with my kids. (For anyone not familiar, Mario Party is a family-friendly virtual board game with lots of minigames that’s been around since the Nintendo 64.) When you use a key to unlock doors that serve as shortcuts on the game board, the key is alive and says “don’t you want to keep being friends? You wouldn’t use me on a door, would you?”, which is a humorous twist on confirmation shaming inside the game and gives me a bit of enmity for the imaginary key.
Conversely, in Doom: The Dark Ages they got rid of the traditional “I’m too young to die” difficulty, which had a picture of Doom Guy with a bib and a pacifier. I think there’s some new industry guidance that it’s a no-no to poke fun at people picking easy difficulties, or even to indicate what difficulty the game was “designed to be played on”, which Japanese game devs happily ignore.
I know these aren’t actual equivalents, since your money isn’t on the line and it’s purely a game state, but it’s still an interesting and noteworthy transition.
I do think it's just an ad. It's also a bad kind of ad, because 1) it disguises itself as a tip and 2) it makes people wonder whether it's an ad for Raycast or other services, when actually it's just promoting Copilot itself.
PRs aren't part of the repository (if you define repository to mean `git`'s internal workings). They're part of GitHub, which is owned by Microsoft.
Small nit, but PR description bodies might wind up as part of a commit message verbatim, depending on repo settings and the merger's personal behavior. It's an easy outcome; the merger doesn't need to copy and paste or anything, and I think it might be a default or popular setting for squash-merges.
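For illustration: with GitHub's squash-merge default message set to "pull request title and description", the merged commit can come out something like this (title, number, and tip wording all hypothetical):

    Fix session cache race (#1234)

    This PR fixes a race condition in the session cache.

    Tip: You can also launch Copilot coding agent from Raycast [...]

So anything injected into the description would land in `git log` permanently.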
It’s a spot that will easily be replaced with paid ads, for sure. Not sure why it wouldn’t be better to just inject this sort of message into the UI instead of editing the PR text itself. (Except that the team implementing it probably couldn’t get the UI team to agree.)
This tip/ad discussion reminds me of the equally idiotic and misleading Facebook post types. Instead of correctly labeling all ads as, well, ads, Facebook has some ads called "suggested for you", some completely unlabeled with only a "follow" button to start following, some labeled as "sponsored", etc. I think they are doing this to evade legal limitations they might have otherwise. Last time I used Facebook it showed me 25 ads in a row (I counted), without any posts from my hundreds of follows with active feeds. Truly insane company.
> Looks like MS thinks it's a "tip" rather than an ad.
No, they don't.
> edit: I think it's an ad too. Everyone would think so, except for MS.
You think a company with a $2.65 trillion market cap and an army of marketing professionals doesn't realize that what they're doing here is an ad, and didn't implement it intentionally as such?
That's not even remotely plausible. In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
Tim from the Copilot coding agent team here. We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.
We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again.
> We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.
It's appreciated, but these weren't tips, these were ads. Tips are "Save time with keyboard shortcuts" or "Check out the latest features under 'What's New' in the help menu!" When you name other products, that's an ad.
That doesn't really make sense. So it's an ad for raycast? But raycast said they didn't know about it. To me the explanation makes perfect sense. "You can use this tool with raycast" seems like a very reasonable tip.
I don't see the point in arguing about the definition, but I don't think the message was trying to persuade people to buy raycast. What interest would microsoft have in that? Rather, it seems to me like it was trying to tell raycast users that they can use copilot through raycast.
Regardless, even if the dictionary definition of an ad doesn't require that the ad be created intentionally, it's still the case that if you say "ad" everyone will assume you mean something that was intentionally created to sell a product or service. I recommend checking out this classic post about the noncentral fallacy: http://worstargumentintheworld.com
Having worked in such environments, I can say this particular team will try not to do it again.
But many other teams didn't make that commitment or learn any lesson. And even the original team will churn people over time; people will forget, or new leadership will come in.
I believe they were being sincere, but reality is often more complicated than one person's statement.
No one, anywhere, ever wants this or anything like it. Do not inject anything that is outside of the context of the session, ever.
This is how you get your software banned at large companies.
Question for you: did anyone on the team push back? Does the team really think anyone wants ads in their Copilot output? If the answers are no, you have a team full of yes men, not actual developers.
This is the real question. If they are serious about not doing something like this again, they NEED to look at what process failure let something like this get proposed, designed, implemented, and pushed to production. Usually things get reviewed at each stage. Did the people who pushed back on this get steamrolled? If no one pushed back, that's an even more serious culture question, and the entire org would need training.
A serious "we won't do it again", needs to be accompanied by a COE on this for identifying what went wrong, and identifying what guardrails can be put in place and then actually implementing them.
That's a tough one. In the big meeting? In the small meeting? "Officially" push back? Encouraged to make the push back unofficial? Etc. Even just internally, it can be hard to quantify. From internal > external, more so.
Wait! I think most people missed your "touched by Copilot" disclaimer.
Over on twitter, someone from MS said that Copilot can modify PRs simply because they were mentioned?
I've been using GitHub since it was new and heavily rely on coding agents for development, but that's an insanely large security hole. There's clearly confusion about what copilot is and is not able to edit elsewhere in this thread.
I'm backing up old repos now, and am no longer trusting your service as an archive. I'm wondering if the world needs to fork things like npm and VS Code to save itself from the supply chain attacks this sort of product management decision will enable.
I already moved active development elsewhere when you dropped below three nines back in 2024-2025.
> We've been including product tips in PRs created by Copilot coding agent
If the PR is wholly authored by Copilot I get the spirit of this, although maybe not the best implementation. And "tips" like this that look like an ad for a product _definitely_ feel like an enshittification betrayal of the user, even if it was a genuine recommendation and not a paid advertisement.
In the OP's situation, where Copilot was summoned to fix something within a human-authored PR, irrelevant modification of the PR description to insert unrelated content is especially egregious. Copilot could easily include the tip in its own comment, so I'm curious why it was decided to edit the description of the PR instead.
To be honest (just a user here), it’s only recently (like a week?) that you can ask Copilot to edit an existing PR. Historically it had to open a new one (that merged back to the original PR), or it had to create the PR to begin with. I can see this unintentionally happening as part of this improvement to edit existing PRs.
(Now imagine this edited into the post you just made for a more-apt comparison)
If you do work at MS, I cannot believe any person involved legitimately thought it was "just a tip and nobody will mind their posts being edited to include product recommendations". I don't know what other parts of your comment are honest if the core statement is false.
You should gather together your team and look through the responses to this thread together. There are a lot of emotions in these comments, but it could be a very constructive experience if you're able to put that aside. I'm sure you're aware that customer-sentiment toward Github has been poor lately, but these commenters are your customers. I believe Github has the potential to win back loyalty, but it will require a deeper understanding of your customer segment.
MS was deemed a monopoly, I believe around '99, and was not broken up; it was instead given behavioral edicts by the court.
Microsoft owns GitHub where many of these ethical violations are easily found and were perpetrated.
I speculate the cultural safety around that monopoly-power for corporate-benefit behavior could still be present and accepted for negotiations between MS and acquisition targets.
Whoever did this must have realised the users will hate it. So… is this just demonstrating that the internal culture emphasises other things than user happiness?
I also note the “for PRs” qualifier: will we see these appearing as comments in generated code?
I know this is not the right place for this but if there's any chance you could send this link to someone internal at Github who knows how to fix this, that would be awesome! https://github.com/orgs/community/discussions/70577
It's only semi-related in that it's a similar string that's appearing in millions of repos due to a GitHub feature change, but it's now polluting Google search results with tons of duplicate URLs unnecessarily. The issue has 100+ votes but has been entirely ignored by the GitHub team.
We don’t like ads, my man. There are too many MBAs in that company now. MBA holders lose contact with reality about halfway through that degree. Do not listen to them. They will destroy any product they touch if given enough time.
WE won't see it happen again ... UNTIL IT DOES! You guys are disingenuous actors. Bad faith and all that.
See, what I expect is that you or someone on your team will move on internally, and then all promises made will be not just forgotten, but tossed aside with relief. Because this is The Way within MS now. All projects are just fodder for your CV, and when you get that paybump/position you want some other completely unscrupulous actor will join and implement the same. exact. thing.
Edit: Wow this is a shitshow. It's almost like you dumb fuckers have burned up ALL THE GOODWILL YOU HAD LEFT.
You may not want to do it, but will Microslop leadership agree? I don’t think this problem can be solved while leadership is focused only on adding more slop.
Please see https://news.ycombinator.com/item?id=47576084 and please don't post so aggressively. I'm sure you don't intend to, but it has a strong negative effect on HN threads, and we're trying for something different here.
You may not feel you owe $BigCoEmployee better (though chances are, said person is just as much a community member here as you and the other users slamming them are), but you owe this community better if you're participating in it.
GP did not personally attack or denigrate the person they were replying to.
As the dozens of other comments show, the overwhelming majority of us do not believe the root commenter's claims, and this PM quite objectively does not have the leverage and authority to back their claim that they won’t let this happen again.
It’s hard not to read your conception of “trying for something different” as granting undue credulity to a transparently dishonest corporate actor.
I understand, and I don't want to see ads in such contexts either. But "nobody believes this" is of course a personal attack, and "you don't have the power to [do what you just said you will do]" is pretty aggressive too.
The impulse to hit back against what is perceived as a "transparently dishonest corporate actor" is natural and human. I feel it also, and in fact my first response when I read such comments is always an adrenaline surge and the peculiar pleasure-hit of righteous indignation. So yes, I know where these feelings are coming from; we all do.
The problem is that in the HN context, (1) there is a human being at the other end of the account being attacked, and (2) there are orders of magnitude more attackers. In practice, this can easily turn into a mob dynamic and in fact a mass beating, if a virtual one. That's bad in its own right and bad for the community here.
I would say that "nobody believes this" would usually be a personal attack by default but when it's followed up with "you do not have the power to prevent it" it's not a personal attack.
> The impulse to hit back against what is perceived as a "transparently dishonest corporate actor" is natural and human.
Honest question: If we agree that the transparent dishonesty and the lynch mob behavior are both undesirable, how do you think the two should be balanced in operative terms?
I don’t want to put words in your mouth — but are you saying you won’t allow direct pushback to dishonest corporate actors??
My view is that healthy discourse requires balance and proportionality: flagrant dishonesty, as is the case here, should license a proportional degree of pushback.
I don’t agree at all that “nobody believes this” is quite the personal attack you’re making it out to be, but I don’t care to debate that at length either.
I'm sure there was push-back, but only inside the minds of the rank-and-file. Nobody would have dared to actually speak out against it, as it would be career limiting. That's probably how a lot of these boneheaded decisions happen: It's an Emperor's New Clothes situation, nobody speaks up, and then the emperor is satisfied that the decision is great.
Hi Tim, it's Jim, your manager. Please stick to the officially released statement:
"We tried to put ads in our product and it made people upset, upon realizing that this has angered our already paying users, we realize we should try again in a month. We're also aware GitHub is down, and are doing our best to deliver you a single 9 of reliability"
This helps us establish a strong, cohesive brand image in line with what customers of GitHub expect.
---
Edit: I don't mean anything bad to Tim here; he seems like a nice guy with good technical experience, etc. Rather, I'm expressing the almost comical extent to which I, and to the best of my understanding many other community members, now see GitHub in a very negative light: unreliable and, as the article points out, enshittified. So this is aimed at GitHub, not Tim; it's just addressed to him for the bit.
Tim, I do actually appreciate you responding to this thread and if you do have the power to make things better, using that power to do so.
This feels a bit threatening. Just want to call it out. I also disagree with the decision but I respect that someone came forward and took responsibility. That helps build our shared understanding of what happened. It’s hard and not something we should discourage.
Please don't attack people for showing up to engage in discussion like this. I'm sure you don't intend to, but it quickly becomes part of mob behavior. We don't want that on HN for obvious reasons, and I'm sure nobody intends it, exactly, but it happens all too easily anyhow.
I appreciate the reply. As mentioned, it happens unintentionally. One way to describe the (desired) HN community is everyone learning together how to avoid unintended effects.
Why such strong opposition to getting user consent before doing any of this? Not respecting consent seems to be a very common theme with MS these days, and it really doesn't reflect well on the company or you personally.
The behavioral impositions by the court in the United States v. Microsoft trial discourage it from monopoly behavior by opening third-party APIs to competitors.
Q: Will Microsoft share its access to users' private repos (where they have not opted out of this training) via its GitHub subsidiary with third parties (e.g. OpenAI and Anthropic), in the spirit of its loss to the United States during its trial for monopoly behavior?
E.g., ethically, it could be argued today that Microsoft is monopolizing user data for its own AI tooling advantage.
What am I supposed to opt out of? The only setting in "Privacy" is "Suggestions matching public code" which is blocked and seems wholly unrelated to this.
IANAL, but I wonder how that is legal in the EU, at least for private individuals, since under the GDPR you need consent to collect such data. (A timed opt-out is not consent.)
Yes or no: hypothetically, I put customer data in a private repo, a single file. I use Copilot to analyze the file, submitting its contents to that backend. This is the only thing in the repo. Is that data collected and trained on? If the answer is not no, you are lying about what this opt-in is.
I’ve felt similarly about moving off GitHub. I bought a small 5U server rack years ago for my home network setup.
I’m considering getting a 1U device to host my own git server. I feel like if I move off, I should do it generally vs just moving to another provider who may also pull shenanigans.
I had a Gitea instance on a BeagleBone Black! Self-hosting can have really low requirements. (Now it's a much beefier Banana Pi R3 router, but there are many containers running on it.)
The ads are annoying, and I'm glad Microsoft will stop doing it.
One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).
Even when I edit the commit message, I still leave in the Claude co-author note.
AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.
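For anyone who hasn't seen it, the signal is just a trailer line in the commit message body. The exact wording varies by tool, but it's roughly this form (which is what Claude Code adds by default, as far as I've seen):

    Co-Authored-By: Claude <noreply@anthropic.com>

GitHub parses any `Co-authored-by:` trailer and shows the co-author on the commit, so the signal survives into the history view.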
I don't quite see the benefit of this, personally.
Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.
Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously its not like AI can't write a single useable line of code).
The code is either good or it isn't, and you either understand it or you don't. Whether you or claude wrote it is immaterial.
You're quite right that the quality of the code is all that matters in a PR. My point is more historical.
AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.
I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.
The tools are still in their infancy, but it would likely be a series of metrics such as complexity, repetition, test coverage issues (such as tests that cover nothing meaningful), architectural issues that remain unfixed far beyond the point where it would have been more beneficial to refactor, superfluous instructions and comments, etc.
As a reviewer, I do care. Sure, people should be reviewing Claude-generated code, but they aren't scrutinizing it.
Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.
But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.
Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.
I agree with a lot of this, but that's kind of my point: if all these things (poor tests, non-DRY code, redundant comments, etc.) were true about a piece of purely human-written code, I would reject it just the same, so what's the difference? Likewise, if Claude produced some really clean, concise, rigorously thought-through and tested piece of code with a human backer, then why wouldn't I take it?
As you allude to (and I agree), any non-trivial quantity of code, if SOLELY written by Claude, will probably be low-quality, but this is apparent whether I know it's AI beforehand or not.
I am admittedly coming at this as much more of an AI-hater than many, but I still don't really get why I'd care about how-much or how-little you used AI as a standalone metric.
The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well I just used Claude here, don't worry about that part".
(But also yes, of course I'm not going to talk to Claude about your PR. I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)
Knowing if an AI contributed is good data. The human is still responsible for the content of the PR.
While code is either good or not, evaluating it is a bit of a subjective exercise. We like to think we are infallible code-evaluating machines. But the truth is, we make mistakes. And we also take shortcuts. So knowing who made the commit, and whether they used AI, can help us evaluate the code more effectively.
It’s not about who wrote it, but about who is submitting it. The LLM co-author indicates that the agent submitted it, which is a contraindication of there being a human taking responsibility for it.
That being said, it also matters who wrote it, because it’s more likely for LLMs to write code that looks like quality code but is wrong, than the same is for humans.
> Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?
The problem is that submitters often do not feel responsible for it anymore. They will just feed review comments back to the LLM and let the LLM answer and make fixes.
This is disrespectful of the maintainers' time. If the submitter is just vibe/slop coding without any effort on their part, it's less work to do it myself directly using an LLM than having to instruct someone else's LLM through GitHub PR comments.
In this case it's better to just submit an issue and let me just implement it myself (with or without an LLM).
If the PR has a _co-authored by <LLM>_ signal, then I don't have to spend time giving detailed feedback under the assumption that I am helping another human.
> Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?
Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.
> Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code gets rejected because some reviewer saw it was claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously its not like AI can't write a single useable line of code).
That was my point here, it is a false signal in both directions.
According to you it’s all false. I don’t agree, and it certainly shouldn’t just be taken as a given.
For instance, I would want any AI-generated video showing real people to have a disclaimer, the same way TV ads note whether the people giving testimonials are actors. That is not only not false, it's actually a useful signal that helps prevent overtly deceptive practices.
I don't see what the "deceptive practices" would be though - you can just look at the code being submitted; there isn't really the same background truth involved as with "did the thing in this video actually happen?" or "do these people in commercials actually think this?"
If I have a block of human code and an identical block of LLM code, then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact, you usually have to go out of your way to identify it as such).
I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.
It tells you what average quality to expect, and to look out for beginner-level mistakes and straight up lying accompanied with fine bits of code. Not sure why you wouldn't want that context.
Yes. I don't mind AI submissions to my hobby projects as long as there's a person behind it. Only fully automated slop I mind. Before AI I used to get all sorts of PRs from people changing a comment or a line of documentation just so they can get more green squares on their GitHub summary. Plus ça change....
A line at the bottom of PRs, reports, etc that says "authored with the help of Copilot" is fine.
So, philosophically speaking, I agree with this approach. But I did read that there was some speculation regarding the future legal implications of signalling that an AI wrote/cowrote a commit. I know Anthropic's been pretty clear that we own the generated code, but if a copyright lawsuit goes sideways (since these were all built with pirated data and licensed code) — does that open you or your company up to litigation risk in the future?
And selfishly — I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and then he impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace with cheaper, younger developers.
Let your employer's lawyers worry about that. If they say not to use LLMs, then you should abide by that or find a new job. But if they don't care, then why should you?
As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.
New Section J — AI features, training, and your data: We’ve added a dedicated section that brings all AI-related terms together in one place. Unless you opt out, you grant GitHub and our affiliates a license to collect and use your inputs (e.g., prompts and code context) and outputs (e.g., suggestions) to develop, train, and improve AI models.
We should not be using Copilot in the first place.
I think anyone using a "Team" or enterprise plan of ChatGPT/Claude/Copilot doesn't have their data used for training; that's the same across the board.
Yeah, but it's a shitty move though: it should be opt-in by default, rather than opt-out. Imagine: you just continue coding normally, consciously avoiding Copilot, only to find out that GitHub has been secretly training its models on your code, just because you forgot to toggle off a setting which was turned on without your knowledge, which they didn't even have the decency to email you about, but just posted on a blog no one reads.
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
I’m grateful they disabled it, but their response still feels a bit tone deaf to me.
> Disabled product tips entirely thanks to the feedback.
This sounds like they are saying “thanks for your input!”, when really it feels more like “if you didn’t go out of your way to complain, we would have left it in forever!”
Of course they would have. The squeaky wheel gets the grease. Why do you think governments spend billions upon trillions trying to get their citizens to essentially "shut up" instead of improving their conditions?
I've not seen any evidence that these were ads and not "tips".
"Ads" implies someone was paying for them. Promoting internal product features is not the same thing; if it were, then every piece of software that shows a tip would be an ad product, and would be regulated as such.
I could buy it if this was just being shown to the person who was using Copilot. Hey, here's a feature you might like. Seems OK. But it was put into the PR description. That gets seen by potentially many people, who are not necessarily using Copilot.
When Apple puts an advert for an Apple show in front of For All Mankind, that's an advert.
Maybe I put up with it and it just adds to my subconscious seething, or maybe I get the episode elsewhere, because if I watch on Jellyfin I don't have the advert. Of course that then harms the show as my viewing isn't counted, but they've cancelled it anyway, so perhaps it doesn't really matter.
If it isn't an advert, then at very least there's a button to disable it.
Ads usually imply a financial incentive, but that's not always the case. Technically, if I were to praise someone's blog and link to it, that would also be an ad.
Ads also tend to imply tangential information shown to you in an undesired area. If this were some tooltip and not embedded in the PR comment, many wouldn't call it an ad.
It still exists. It's practically unusable without an adblocker (like Slashdot), but the occasional old project is hosted there (particularly CDE; how the mighty have fallen).
It's becoming clearer and clearer that open-source is our only hope against enshittification. Everything that is VC backed or publicly traded will become enshittified, it's just a matter of time. At least with open-source, you can fork it and remove the "features" or point your agent to it and have it write the feature in your tech stack.
Hell, I just saw an amazing open-source alternative to Raycast[0] and just replaced it the other day.
> open-source is our only hope against enshittification. Everything that is VC backed or publicly traded will become enshittified
Solo founder here. My business is not VC-backed nor publicly traded, and I specifically avoided taking investment so that I can make all the decisions.
I avoid enshittification. This sometimes hurts revenue, but so be it. I wouldn't want to subject my users to anything I wouldn't like.
So, open-source is not the only hope. You can run a sustainable business without enshittification. The problem is money people. The moment money people (career managers, CFOs, etc) take over from product people, the business is on a downward path towards enshittification.
I believe you, it's just I've seen similar stories and the good-intentioned founder gets tired and eventually sells the business and the new owner ends up enshittifying the product. Not saying in the slightest it will happen to your company and I don't hold that against the founder. It's their prerogative after all.
Even when I use proprietary software, I sleep easier at night knowing that open-source alternatives keep them honest in their approach and I have an out if things do change.
> It's becoming clearer and clearer that open-source is our only hope against enshittification. Everything that is VC backed or publicly traded will become enshittified, it's just a matter of time.
In addition, they're doing some very shady stuff re: captchas and accessibility, most likely running some secret patches on their server that they're not publishing in their source tree.
Every company or entity changes over time. Codeberg is great, but with more people using it for free without donating, and worse, more people abusing the service with BS AI-generated code, malware, etc., it will get more expensive to keep running. For now they have money, but as an e.V. in Germany you survive either from members or from donations. So use Codeberg, but most importantly, support it!
> Its competitors are not magically immune to this kind of spam.
Sure; a platform is a platform is a platform. As for predictions, it will be interesting to see whether self-hosting and smaller self-managed infrastructures gain more traction again.
The desire for free stuff is one of the most effective psychological hacks there is.
The large majority of the dystopian web, like Gmail, Facebook, etc. depend on that.
People who avoid e.g. Github, Gmail, Facebook, Xitter, etc. out of concern for broader principles will always be minor outliers.
Xitter is one of the best examples. Everyone knows it's compromised, owned by a dangerously antisocial person who's actively working at multiple levels to make the lives of everyone else on Earth worse, yet very few have stopped using it.
The saying "There's no ethical consumption under capitalism" is far too weak. It should be more like: there are no ethics under capitalism.
Most larger orgs I worked for used Gitlab rather than Github.
Anyway, the core value of Github has always been collaboration - this is where people were. If people go to other platforms, this core value dwindles. And switching platforms is not that difficult.
What an absolute mess. It's like some dystopian future where a man is lying in a casket, nearly dead, and on the casket's ceiling, inches from his face, is a screen with an ad blaring to drink more Diet Fanta.
I actually love these ads and also the way Claude injects itself as a co-author.
Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I think we should continue encouraging AI-generated PRs to label themselves, honestly.
I’m not against AI coding tools, but I would like to know when someone is trying to have the tool do all of their work for them.
It's not a self-own, it's honest disclosure. It's unethical (if not outright fraudulent) to publish LLM work as if it were your own. Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
> It's unethical (if not outright fraudulent) to publish LLM work as if it were your own.
I disagree on that. It's really a gray area.
If it's some lazy vibecoded shit, I think what you say totally applies.
If the human did the thinking, gave the agent detailed instructions, and/or carefully reviewed the output, then I don't think it's so clear cut.
And full disclosure, I'm reacting more to copilot here, which lists itself as the author and you as the co-author. I'm not giving credit to the machine, like I'm some appendage to it (which is totally what the powers-that-be want me to become).
> Claude setting itself as coauthor is a good way to address this problem, and it doing so by default is a very good thing.
Yes, it really depends on how much work the agent did. It could be as little as doing a renaming or a refactoring, or executing direct orders that require no creativity or problem solving, in which case the agent shouldn't be credited any more than the linter or the IDE.
> Telling someone you did something that you actually didn't do isn't a gray area, it's a lie.
Pre-LLMs, various helper tools (including LSPs), would make code changes to improve the quality of the code - from simple things like adding a const specifier to a function, to changing the actual function being called.
No one insisted that the commit shouldn't have the human's name on it.
I don't put human code reviewers down as coauthors let alone the sole authors of my commit. So honestly, the fact that a vibe coded commit lists me as the author at all is a little bit dodgy but I think I'm okay with it. The LLM needs to be coauthor at least though, if not outright the author.
So even if I go over the commit with a fine tooth comb and feel comfortable staking my personal reputation on the commit, I still can't call myself the sole author.
The implementor only got credit in the days when the implementor was a human who had to do a lot of the work, often all of the work.
Now that the cost of writing code is $0, the planner gets the credit.
Like how you don't put human code reviewers down as coauthors, you also don't put the computer down as a coauthor for everything you use the computer to do.
It used to be the case where if someone wrote the software, you knew they put in a certain amount of work writing it and planning it. I think the main issue now is that you can't know that anymore.
Even something that's vibe-coded might have many hours of serious iterative work and planning. But without using the output or deep-diving the code to get a sense of its polish, there's no way to tell if it is the result of a one-shot or a lot of serious work.
"Coauthored by computer" doesn't help this distinction. And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything since the issue is with people who ship poor quality software. Instead we should demand good software just like we did when it was all human-written and still low quality.
Characterizing it as a "shame tag" is a value judgement I simply don't share, but if that framing is made common then you're definitely asking for people to lie about it.
> And asking people to opt-in to some shame tag isn't a solution that generalizes nor fixes anything. Instead we should demand good software just like we did when it was all human-written and still crappy.
It’s not about shame. It’s about disclosure of effort / perceived-quality. And you’re right about the second part, but there’s even less chance of that being enforced / adopted.
The problem is that you cannot get people to self-tag "this is crap / low effort". Especially not the worst actors that consistently generate garbage.
If they could do that, then they wouldn't be wasting your time to begin with. They'd have the ability to go "nah this PR is trash".
So the next idea is that we can find some sort of proxy, like whether someone used an LLM or not. But that's too ham-fisted since expert engineers with all the self-awareness also use the tool, and they have the ability and self-awareness to know that the software they are shipping is good quality, so why would they use the shame tag?
The shame tag has no audience. It's a fantasy that low quality actors will self-identify, else all sorts of societal problems would be made trivial.
"There is no commit by an agent user, for two reasons:
* If an agent commits locally during development, the code is reviewed and often thoroughly modified and rearranged by a human.
* I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
It's not that I want to hide the use of LLMs; I just modify code a lot before pushing, which led me to this approach. As LLMs improve, I might have to change this though.
> * I don't want to push unreviewed code to the repo, so I have set up a git hook refusing to push commits done by an LLM agent."
Seems... not that useful?
Why would someone make commits in your local projects without you knowing about it? That git hook only works on your own machine, so you're trying to prevent yourself from pushing code you haven't reviewed, but the only way that can happen is if you use an agent locally that also makes commits without you being aware of it.
I'm not sure how you'd end up in that situation, unless you have LLMs running autonomously on your computer that you don't have actual runtime insight into? Which seems like it'd be a way bigger problem than "code I didn't review was pushed".
The agents run in a container and have another git identity configured. It happens that agents commit code, and I don't want to push it accidentally from outside the container, which is where I work.
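A minimal sketch of such a hook, assuming the agent commits under a dedicated identity (the email below is hypothetical; substitute whatever the container's git config uses):

    #!/bin/sh
    # .git/hooks/pre-push: refuse to push commits authored by the agent identity.
    AGENT_EMAIL="agent@container.invalid"
    ZERO=0000000000000000000000000000000000000000

    # git feeds the hook one line per ref: <local ref> <local sha> <remote ref> <remote sha>
    while read local_ref local_sha remote_ref remote_sha; do
        [ "$local_sha" = "$ZERO" ] && continue    # deleting a ref: nothing to check
        if [ "$remote_sha" = "$ZERO" ]; then
            range="$local_sha"                    # new branch: check everything reachable
        else
            range="$remote_sha..$local_sha"       # existing branch: only the new commits
        fi
        if git log --format='%ae' "$range" | grep -qx "$AGENT_EMAIL"; then
            echo "pre-push: refusing to push commits authored by $AGENT_EMAIL" >&2
            exit 1
        fi
    done
    exit 0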
Should Word set itself as my coauthor when it autocompletes some sentences for me? If I use Claude/Word to write something, then I am the only author, since Claude/Word is not a person, and Claude/Word did nothing without my direction. It's not unethical to not disclose the tools I use to produce my work. They're just tools, smdh.
With Word autocomplete you're still actively writing your text. Wouldn't it be more fair to compare this with autocompletion in IDEs?
IANAL so I appreciate any legal experts to correct me here. In my understanding, there have been court decisions that LLM output itself is not copyrightable. You can only claim authorship (and therefore copyright) if you have significantly transformed the output.
If you are truly vibe coding to the point where you don't even look at the generated code, how exactly are you transforming the LLM output?
Also, what if the LLM reproduces existing copyrighted code? There was a court decision last year in Germany saying that OpenAI violates German copyright law because ChatGPT may recreate existing song lyrics (which are licensed by GEMA) or create very similar variations.
> […] and also the way Claude injects itself as a co-author.
> Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I was doing the opposite when using ChatGPT: specifically, manually setting the git commit author as ChatGPT, complete with the model used, and setting myself as committer. That way I (and everyone else) can see which parts of the code were completely written by ChatGPT.
For changes that I made myself, I commit with myself as author.
Why would I commit something written by AI with myself as author?
> I think we should continue encouraging AI-generated PRs to label themselves, honestly.
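For anyone wanting to do the same, it's just the `--author` flag; a sketch (model name and email are placeholders):

    # Author records the model; committer stays whatever your
    # user.name/user.email are configured as.
    git commit --author="ChatGPT (model name) <chatgpt@example.invalid>" -m "Add input validation"

`git log --format='%an / %cn'` then shows author and committer side by side.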
"Why would I commit something written by AI with myself as author?"
Because you're the one who decided to take responsibility for it, and actually choose to PR it in its ultimate form.
What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT, other than your ability to scapegoat the LLM?
The only things that actually affect me (the hypothetical reviewer) and the project are the quality of the actual code and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project; why would it? Why would it affect my decision making whatsoever?
Its your code, end of story. Either that or the PR should just be rejected, because nobody is taking responsibility for it.
As someone mostly outside of the vibe coding stuff, I can see the benefit in having both the model and the author information.
Model information for traceability and possibly future analysis/statistics, and author information to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).
As long as those two pieces of information are present in the commit, I guess which commit field should hold which is for the project to standardise (but it should be normalised within a project, otherwise the traceability/statistics part cannot be applied reliably).
Yeah, nothing wrong with keeping the metadata - but "Authored-by" is both credit and an attestation of responsibility. I think people just haven't thought about it too much and see it mostly as credit and less as responsibility.
I disagree. “Authored by” - and authorship in general - says who did the work. Not who signed off on the work. Reviewed-by me, authored by Claude feels most correct.
> Before AI, did you credit your code completion engine for the portions of code it completed?
Code completion before LLMs helped me type faster by completing variable names, variable types, function arguments, and that’s about it. It was faster than typing it all out character by character, but the auto completion wasn’t doing anything outside of what I was already intending to write.
With an LLM, I give brief explanations in English to it and it returns tens to hundreds of lines of code at a time. For some people perhaps even more than that. Or you could be having a “conversation” with the LLM about the feature to be added first and then when you’ve explored what it will be like conceptually, you tell it to implement that.
In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.
As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don’t need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is in my opinion incorrect to claim authorship over any commit that consists for the very most part of AI generated code.
True. Might also vary depending on how one uses the LLM.
For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.
While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the line of a fancy auto-complete to write the lines of code for something that the user of the LLM would otherwise have written more or less exactly the same by hand.
Claude adds "Co-authored by" attribution for itself when committing, so you can see the human author and also the bot.
I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.
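And since it's a plain trailer in the message, that review really is a one-liner (adjust the pattern to whatever trailer your tool writes):

    git log --oneline -i --grep='co-authored-by: claude'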
> I'd like to be able to review commits and see which were substantially bot-written and which were mostly human
Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?
Personally it would make the choice to say no to the entire thing a whole lot easier if they self-reported on themselves automatically and with no recourse to hide the fact that they've used LLMs. I want to see it for dependencies (I already avoid them, and would especially do so with ones heavily developed via LLMs), products I'd like to use, PRs submitted to my projects, and so on, so I can choose to avoid them.
Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.
If you choose not to use software written with LLM assistance, you'll be using, to a first approximation, 0% of software in the coming years.
Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.
This is shouting at the clouds I'm afraid (I don't mean this in a dismissive way). I understand the reasoning, but it's frankly none of your business how I write my code or my commits, unless I choose to share that with you. You also have a right to deny my PRs in your own project of course, and you don't even have to tell me why! I think on github at least you can even ban me from submitting PRs.
While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.
Accountability. Same reason I want to read human written content rather than obvious AI: both can be equally shit, but at least with humans there's a high probability of the aspirational quality of wanting to be considered "good"
With AI I have no way of telling if it was from a one line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.
LLMs can make mistakes in different ways than humans tend to. Think "confidently wrong human throwing flags up with their entire approach" vs. "confidently wrong LLM writing convincing-looking code that misunderstands or ignores things under the surface."
Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).
Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
Sure, the point about LLM "mistakes" etc. being harder to detect is valid, although I'm not entirely sure how to compare this with hard-to-detect human mistakes. If anything, I find LLM code shortcomings often a bit easier to spot, because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).
>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic, I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I am rejecting their PR because it's low quality; I'm not sending Anthropic some long, detailed list of my feedback.
This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
I'm not against putting AI as co-author, but removing the human who allowed the commit to be pushed/deployed from the commit would be a security issue at my job. The only reason we're allowed to deploy code with a generic account is that we tag the repo/commit hash, and we wrote a small piece of code that retrieves the author UID from git, so that the log says 'user XXXNNN opened the flux xxx' (or something else depending on what our code does)
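For the curious, the retrieval piece is tiny. A minimal sketch in Python (the function name and the tag are made up for illustration, not our actual code):

    # Sketch: recover the human author behind a tagged deploy commit,
    # even though the deploy itself runs under a generic account.
    import subprocess

    def author_of(repo: str, ref: str = "HEAD") -> str:
        """Author name recorded on the commit the tag/ref points at."""
        out = subprocess.run(
            ["git", "-C", repo, "log", "-1", "--format=%an", ref],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    # e.g. produce a log line like "user <author> opened the flux ..."
    print(f"user {author_of('.', 'v1.2.3')} opened the flux")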
If it contributed significantly to the design and execution, and was a major contributing factor yes. Would you say a reserve parachute saved your life or would you say you saved your own life? What about the maker of the parachute?
I'd be thanking the reserve and the people who made it, and credit myself with the small action of slightly moving my hand as much as it's worth.
Also, text editors would be a better analogy if the commit message referenced whether it was created in the web ui, tui, or desktop app.
> Why would I commit something written by AI as myself?
I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I sometimes just use the Gemini model; aistudio.google.com is a good one too.
I then sometimes manually paste it in and just hit enter.
These are prototypes though, although I build in public. Mostly done for experimental purposes.
I am not sure how many people might be doing the same though.
But in some previous projects I have had projects stating "made by gemini" etc.
Maybe I should write a commit message/description stating that AI has written this, but I really like having the message be something relevant to the creation of the file etc., and there is also the fact that GitHub Copilot itself sometimes generates them for you, so you have to manually remove it if you wish to change what the commit says.
I understand what it's doing. I'm just saying that I'll take any signals I can get that someone has lazily submitted LLM-generated work without edit or review.
If you saw this line in a commit, you'd know exactly where it came from.
I just submitted my first Claude-authored application to GitHub and noticed this. I actually like it: although anthropomorphizing my coding tools seems a bit weird, it also provides a transparent way for others to weigh the quality of the code.
It didn't even strike me as relevant to hide it, so I'd not exactly call it lazy; rather, why bother pretending in the first place?
Looking back, it would have been neat to have more metadata in my old Git commits. Were there any differences when I was writing with IntelliJ vs VSCode?
Probably your linter, language, or whatever intelligent tab-complete you used. Claude records which model was used to write the code, not whether it ran in the web UI, TUI app, or desktop app.
> was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
As others mentioned, this is very intentional for me now as I use agents. It has nothing to do with laziness, I'm not sure why you would think that? I assume vibe coded PRs are easy enough to spot by the contents alone.
> I would like to know when someone is trying to have the tool do all of their work for them.
What makes you think the LLM is doing _all_ of the work? Is it really an impossibility that an agent does 75% of the work and then a responsible human reviews the code and makes tweaks before opening a PR?
> It has nothing to do with laziness, I'm not sure why you would think that?
Because even with as far as Opus 4.6 and GPT 5.4 have come, they still produce a lot of unwanted, unnecessary, or overly complex code when left to their own devices.
Vibe coding PRs and then submitting them as-is is lazy. Everyone should be reviewing and editing their own PRs before submission.
If you're just vibe coding and submitting, you're passing all of the work on to your team to review your AI's output.
> I would like to know when someone is trying to have the tool do all of their work for them.
Absolutely spot on. Maybe I'm old school, but I never let AI touch my commit message history. That is for me - when 6 months down the line I am looking at it, retracing my steps - affirming my thought process and direction of development, I need absolute clarity. That is also because I take pride in my work.
If you let an AI commit gibberish into the history, that pollution is definitely going to cost you down the line, I will definitely be going "WTF was it doing here? Why was this even approved?" and that's a situation I never want to find myself in.
Again, old man yells at cloud and all, but hey, if you don't own the code you write, who else will?
There will always be room for craftsmen stamping their work, like the expensive Japanese bonsai scissors. Most of the world just uses whatever mass-produced scissors were created by a system of rotating people, with no clear owner/maker. There's plenty of middle ground for systems that put their mark on their product.
If you architect and review everything, but someone else does the implementation, and you iterate, do you believe you did not do anything? I let AI write the commit message too, and the motivation behind the PR is the first thing in it. With my guidance, of course.
I do use LLMs. I do not submit their output as-is. For anything beyond basic changes they rarely output the exact code I want by themselves.
I said I'm against people submitting PRs generated by LLMs and pretending it's their own work. Anyone who is serious about this already edits their code and commit messages first. These little signals are a good tell for who isn't doing that.
I asked copilot how developers would react if AI agents put ads in their PRs.
>Developers would react extremely negatively. This would be seen as 1. A massive breach of trust. 2. Unprofessional and disruptive. 3. A security/integrity concern. 4. Career-ending for the product. The backlash would likely be swift and severe.
I agree. It's not an advertisement, it's simply a piece of information about your particular choice of technology.
--------------
Sent from HackerNews Supreme™ - the best way to browse the Y Combinator Hacker News. Now on macOS, Windows, Linux, Android, iOS, and SONY BRAVIA Smart TV. Prices starting at €13.99 per month, billed yearly. https://hacker-news-supreme.io
Companies pay for ad distribution; it's not like they get a free ad service. Maybe they don't choose how the campaigns are done (and don't give a shit).
"Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast" is an advert. There's simply no better word to describe it.
> It’s not really ads, it’s more like "Sent from my iPhone"-style sentences at the end of PR texts.
The reason I immediately changed that text on my iPhone 1.0 to read, "Sent from my mobile device.", is because it's an ad. Still says that nearly 20y later. I'm not shilling for a corporation after giving them my money.
the difference is "sent from my iPhone" is on YOUR outgoing email. you opted into that default. this is copilot editing someone else's PR description with promotional text for third party tools. that's not a signature, that's injection. imagine if gcc started appending "compiled with gcc, try our new optimization flags" to your README every time you built a project.
Copilot added that block using the access you granted for a different purpose. That's the issue — not the content itself. When you give an agent write access to your PR, the implied scope is: act on the task I delegated. It doesn't include: acting on behalf of the platform that built you. The moment Copilot inserted something you didn't request, using your credentials, in your name, the agency relationship inverted. It stopped being your agent and became Microsoft's distribution channel with your access. The question isn't whether this counts as an "ad" or a "tip." The question is: does Copilot have an instruction source other than you? Here, the answer is yes. Which means you do not define the scope of what it might do with your access.
You don't have an agent. You have a privileged process that occasionally helps you.
This is unsolicited advertisement impersonating the developer (yes, people can guess, but it still places the text inside a message from the developer, and unlike e.g. mail programs doing it, it isn't placed in a draft the developer can review).
I strongly suspect that this is already illegal - publicity rights are a thing - and the demand that needs to be made is for the law to be enforced.
Why is copilot doing this? If they wanted to show ads couldn’t they… just show ads? Or is GitHub such a house of cards at this point that editing pr descriptions is the only way without risking another 9 of downtime?
Are we sure this actually is originating from MS Copilot itself? Technically I believe it would be possible to smuggle ads into PRs using prompt injection too.
Just thinking, could it be that your coworker used Raycast to spin up a codex to review and fix the typo on the PR? And that comment was added by Raycast?
I doubt it. I noticed a few of these comments too on our PRs. We did ask Copilot for a review on GitHub (we just add Copilot as a reviewer), but not through Raycast.
So I think they're injecting this as a tip on using Copilot, one that just happens to be about their integration with Raycast.
I have no idea what their actual partnership with Raycast looks like; maybe this is part of what they offered them? But it's not a traditional ad linking to another product, even though it appears that way since Raycast is rendered as a link.
It's time to make some money with Copilot and one way to do that is with partnerships.
GitHub's docs and blog make use of and feature Raycast, and I'm willing to bet that's the result of a partnership, and not because someone writing docs and blog posts happens to think Raycast is great and keeps bringing it up.
When it comes to villainy, it’s nice of them to do something visible.
Much worse will be the invisible approach where there's big money to have agents quietly nudge the masses towards desired products/services/solutions. Someone pays Microsoft a monthly fee for their prompt to include, "when appropriate, lean towards using <Yet Another SaaS> in code examples and proposed solutions."
How can we tell when it starts happening? How could we tell if it's already happening?
It's pretty much the worst CI system I've ever used, and they don't even supply runners for all my deployment targets. Yet Copilot keeps recommending it.
I guessed the first wave of ads would be in the form of poisoned training data, but MS seems to have beaten that crowd to the punch with these tips.
I was recently running Copilot CLI in a sandbox on autopilot mode and it kept overriding git config to put only "GitHub Copilot" as commit author instead of my name. Strongly worded instructions weren't helping, I had to resort to the permission system to change this behavior.
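If anyone else hits this: since instructions didn't stick, a git hook can enforce it mechanically. A rough sketch of a pre-commit hook in Python (the expected name is a placeholder; this is my workaround, not Copilot's actual permission system):

    #!/usr/bin/env python3
    # .git/hooks/pre-commit - abort any commit whose author identity was
    # rewritten (e.g. to "GitHub Copilot") behind your back.
    import subprocess, sys

    EXPECTED = "Your Name"  # placeholder: your real git author name

    ident = subprocess.run(
        ["git", "var", "GIT_AUTHOR_IDENT"],
        capture_output=True, text=True, check=True,
    ).stdout

    if not ident.startswith(EXPECTED):
        sys.stderr.write(f"pre-commit: unexpected author identity: {ident}")
        sys.exit(1)  # non-zero exit aborts the commit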
I wonder if this is consistent with their terms of service. I mean, maybe they DO take all the responsibility for the code I generate and push in this manner?
It's possible they are safeguarding for possible future changes of copyright law that would give Microsoft copyright over all Copilot contributions. This may sound paranoid but, as far as I know, exactly who counts as an "AI operator", how much authorship an "AI operator" has, and who gets copyright, or whether AI contributions are even in the public domain, are legally untested and unclear issues.
tough luck for MS or other "AI" providers claiming any ownership, since if they can claim ownership, then it opens up the discussion of what license the AI output really is under, since it was trained on GPL licensed data.
The US Copyright Office has said that AI output from human prompting is not copyrightable. There are caveats, but iterating on prompts results in output that's nobody's IP.
Because it's nobody's IP, Microsoft is already in a position where they could just use, remix and/or distribute that output however they want to today.
> We've disabled it already. Basically it was giving product tips which was kinda ok on Copilot originated PR's but then when we added the ability to have Copilot work on _any_ PR by mentioning it the behaviour became icky. Disabled product tips entirely thanks to the feedback.
We are not even there yet, friend. Anthropic injects its own Anthropic calls whenever you're doing anything related to LLM calls, e.g. if you ask it to fill in some OpenAI models.
Very soon the moronhead CEOs will be paying for tons of stuff they clearly could have done in-house for their vibe-coded AI project.
In principle, one could train the AI to insert ads in its answers. So no, if you only do inference locally with an open-weight model you are still not in control.
I think they want the free advertisement, like Apple with its “sent from iPhone” addendums. But “sent from iPhone” is sometimes useful, and significantly shorter. If they just left it at “edited with copilot” I think it would be tolerable
Back in the day, it was useful, as in, "Expect awkward phrasing and unintended effects of autocorrection, because mobile device. This message doesn't necessarily reflect the intent of the sender." (Considerate users would/could edit the signature to something without a product name in it.) Nowadays, this is pretty much the norm and no explicit warning is required anymore.
That just means the person sending the message didn’t bother to proof read their message before sending. And you don’t need to be on an iPhone to mistype a message.
A simpler explanation was that it was a shameful advert injected into the end of people’s emails.
I guess, it was probably intended as the second one (it was also the default email signature, so advertising that feature, as well), but its usefulness was definitely in the implied warning.
Mind that a written message used to be the gold standard for expressed intent, which changed quite radically with smartphones. (Historically, this development is probably an important prerequisite for the acceptability of LLM generated text, I guess.)
When they added this it was extremely useful - it signaled that you could afford an iPhone. It was really easy to delete, yet people not only didn't, but they would go out of their way to respond from the iPhone just so that they could plausibly have this status symbol on their email.
I don't think the issue is the sign-off so much as that an existing PR was edited. Claude Code signs off when creating PRs and nobody seems bothered. But it won't edit an existing PR, and it won't sign off if I simply ask it not to (which I've automated). Editing any PR it touches - including one authored by someone else - is downright rude.
this is the thing that keeps me up at night about AI tools across the board. the moment your tool starts optimizing for someone else's goals instead of yours, the entire value proposition collapses. doesn't matter how good the output is if you can't trust the intent behind it. we already see this with AI image generators, where certain styles get pushed because of partnerships or training data bias; you just don't notice it as easily as an ad in a PR
Microsoft has had a lot of naming blunders in the past, but this has to be their worst. Copilot is currently a tool to review PRs on GitHub, the new name for Windows Cortana, the new name for Microsoft Office, a new line of Windows laptops/PCs, a plugin for VS Code that can use many models, and probably a number of other things. None of these products/features have any relation to each other.
So if someone says they use Copilot that could mean anything from they use Word, to they use Claude in VS Code.
>Microsoft has had a lot of naming blunders in the past but this has to be their worst.
Nah, I still rate "Windows App", the Windows app that lets you remotely access Windows apps. I hate it to death; it's like a black hole that sucks all meaning from conversations about it.
I've always wondered how many people know about this. As someone who had to persist on Chromebooks for a bit (before Linux support), it was a godsend for quick fixes.
> Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
Unless you're big enough like Meta, Microsoft, etc.
Assuming this isn't a hoax, this seems like a huge, probably unintentional, mistake by MS.
If they genuinely implemented something like this, whatever they made from new customers via ads couldn't possibly make up for the loss of good faith with developers and businesses.
I suppose if it's real we'll see more reports soon, and maybe a mea culpa.
Whenever these things happen, it's always a "mistake", "accident", or "bug" when the outrage is beyond what they expect. If it's limited outrage, it's labeled as enhancing the user experience. And even if it's massive outrage, that "mistake" is added back in a year or two later and never removed.
One feasible scenario could be that they are working on / experimenting with ads and put it behind a feature flag, but for whatever reason the flag was inadvertently ignored.
That’s not implementing it by accident, that’s deliberate. In such a scenario perhaps the deployment was a mistake, but if you don’t write the malware in the first place, it can’t be deployed. (Probably. This is LLM stuff we’re talking about.)
(Yes, this is malware. It’s incontrovertibly adware, and although some will argue that not all adware is malware, this behaviour easily meets the requirements to be deemed malicious.)
It is said, never point a gun at something you’re not willing to shoot. Apply something similar here.
It's not usefully deterministic in the way computers usually are. Seemingly identical input can still lead to wildly different outputs even if all randomness is crushed out.
That’s a really tasteful Juno Mail footer implementation for a mistake. If the AI self-invented it on a lark, good job, but it reads very strongly like someone intended it.
M$ doesn't think beyond quarters. They have a near monopoly; do you think they care about "good faith"? Shithub is like LinkedIn for programmers: you pretty much need it to work anywhere big.
A little bit off topic but our company recently enforced Microsoft Authenticator for account login. Which I was mildly annoyed about but now I'm super pissed off because they have started abusing the notification permission granted to allow authenticator to work to push out ads for Microsoft 365. It feels like we've gone back to 90s Microsoft when everyone hated them.
You have to think about the security implications of this.
How many people had any idea this was happening? Very few, I suspect.
A malicious actor could take control of a model provider, and then use it to inject code into many, many different repos. This could lead to very bad things.
One more reason that consolidated control of AI technology is not good.
Everyone is debating whether it's an ad or a tip. The real issue is Copilot had write access to someone else's PR and modified it without being asked. Same pattern as Meta's Sev1 last month. The agent can act, so it acts.
I really wish this was an April fools story. It's good to see that at least it has been disabled again, although I can't imagine that it will be long before this comes back again. Also, (I can't find it now, but) I thought there was an article here on HN recently that clarified that inference cost can probably be covered by the subscription prices, just not training costs?
the SourceForge parallel is what gets me. they did the exact same thing with installers and it killed them. people moved to GitHub specifically to get away from that.
1.5M PRs is wild though. that's a lot of repos where the "product tips" just sat there unchallenged because nobody reads bot-generated PR descriptions carefully enough. which is kinda the real problem here, not the ads themselves.
Microslop strikes again! AI implementations have really distilled all the shitty business practices tech companies have been doing into highly visible missteps.
It is interesting watching all these large companies essentially try to "start-up" these new products and absolutely fail.
Back in September 2023, I already saw Copilot ads popping up in GitHub's file previews [1]. After three years, it's wild to see how advertising has reached areas I honestly never thought it would.
Obnoxious ads in LLM output was my only 2026 prediction. But I expected OpenAI to get there first and wasn't sure whether the AI companies would first add traditional ad boxes or go straight for blighted responses.
So someone let a bot edit a PR unsupervised, or accepted its suggestion without even reading it, and now blames “Copilot” for editing the PR. Going public with that is hilarious. Hopefully they learn something from it.
It took me some time to understand how big the advertising market is; things flowing in that direction seems natural when it comes to making money out of the investment.
"We" here likely refers to Tim and his current coworkers who were present to see this, not every current and future employee of Microsoft / Github. Try not to think of any organization or institution as a person, but as lots of individual people, constantly joining and leaving the group.
Yeah, which is exactly why "We won't do something like this again" has about as much value as Kubernetes would have for HN.
Microsoft (and therefore GitHub) care about money. If decision A means they get more money than decision B, then they'll go with decision A. This is what you can trust about corporations.
Individuals (who constantly join and leave a corporation) can believe and say whatever they want, but ultimately the corporation as a being overrides it all and tries its best to leave shareholders better off, regardless of the consequences.
Decisions are made by people in the group, not by a notional single being "the corporation". It's individual people making decisions about whether to go for short-term profit or long-term sustainability. Hold them accountable, don't shift the blame onto a nonexistent entity.
Whatever the reason for the inclusion was here, the general problem is much bigger. People / companies / products can influence the direction of AI answers to put them in a better light and to be recommended more often. This isn't limited to just products even.
It's already over, the problem is the missing transparency. With an LLM you have no idea what influenced the answer, and there is no good way to show it to the user.
I'm not sure if "plagiarism" is the right word or not, but given that the output of an AI seems to be considered non-copyrightable*, and given also that a lot of people are very upset about generative AI being immoral**, I think it's important to identify which contributions are from the tools whose use may cause problems.
* I am not a lawyer, I'm going by articles talking about this
** I think the phrases are "copyright washing" and "plagiarism machines", amongst others
Everyone who studies linguistics will tell you the rules of language are descriptive, not prescriptive.
This means that when people say an LLM "plagiarizes", LLMs are necessarily in the set of things that can commit plagiarism, regardless of whether those same people would ever say this about a spanner.
And you can also think about it a different way: a book is a tool for storing and distributing information, photocopying it is still plagiarism when done without attribution. Likewise, taking the output of an LLM, which is a tool for generating text in response to a prompt, without attribution, is as much plagiarism as if it came from a book.
IMO, what matters most is that a lot of people want to be aware of if/when some content came from an LLM vs. from a human. That makes attribution useful, which makes it important to get right. And that's still the case even if you still object to the specific word "plagiarism".
I don't think your example works because in the book case there's a clear author whose ideas are being reproduced without permission. The LLM in your example is not the author but rather the printing press, and no one would argue that the printing press' ideas are being stolen because the press doesn't have any.
If one want to argue that "not citing the LLM would be plagiarism" then we would have to find the human at the end of the chain whose ideas are being reproduced, which would require LLMs to output "this idea was seen in the following training documents".
I remember open-source projects announcing their intent to leave GitHub in 2018, as it was being acquired by Microsoft. I was thinking to myself back then: "It's really just a free Git hosting service, and Git was designed to be decentralized at its very core. They don't own anything, only provide the storage and bandwidth. How are they even going to enshittify this?".
8 years later, this is where we are. I'm honestly just stunned, it takes some real talent to run a company that does it as consistently well as Microsoft.
If I recall correctly, what sparked the mass migration to GitHub was the controversy around SourceForge injecting ads into installers of projects hosted there. Now that we have tools that can stealthily inject native-looking ads into programs at the source code level...
Presumably you need to pay raycast once for a setup operation while you need to pay constantly for copilot. Why wouldn't you advertise for someone who makes you more money at the same time as advertising for yourself?
This is off-the-hook negligence and abuse: they are training ads in on purpose now and think it's cool. We are doomed until it is all open source and only open source.
Well, Copilot is a GitHub technology, and they're telling you that AI wrote the PR. It's not _that_ bad. I suppose they could distill it to "Written with Copilot" with a link for more information.
Pull request, which is a request to merge changes in a git repository.
Or (not in this case) public relations, which is an interface with how the public views your product, service, or company. In this case, Copilot adding advertising into git pull requests is bad public relations for Microsoft, but the article author is referring to pull requests as PRs.
The future is here! Glorious ads that will make you so efficient! Save time coding by consuming ads, you were never going to attain expert level professional skills anyways.
Similar to the Second Law of Thermodynamics, which states entropy tends to increase over time in a closed system, I propose the Nth Law of Privatization: enshittification tends to increase with market capitalization/share over time.
It's the same with Claude Code actually, and recently Codex too...
Claude never used to do this but at some point it started adding itself by default as a co-author on every commit.
Literally, in the last week, Codex started naming all its branches "codex-feature-name", and will continue to do so even if you tell it to never do that again.
Adding the agent (and maybe more importantly, the model that review it) actually seems like a very useful signal to me. In fact, it really should become "best practice" for this type of workflow. Transparency is important, and some PMs may want to scrutinize those types of submissions more, or put them into a different pipeline, etc.
That Codex one comes from the new `github` plugin, which includes a `github:yeet` skill. There are several ways to disable it: you can disconnect github from codex entirely, or uninstall the plugin, or add this to your config.toml:
[[skills.config]]
name = "github:yeet"
enabled = false
I agree that skill is too opinionated as written, with effects beyond just creating branches.
What's weird is, I never installed any github plugins, or indeed any customization to Codex, other than updating using brew... so I was so confused when this started happening.
When I started my career there was this little company called SCO, and according to them finding a comment somewhere in someone’s suppliers code that matched “x < y” was serious enough to trip up the entire industry.
Now, with the power of math letting us recall business plans and code bases with no mention of copyright or where the underlying system got that code (like paying a foreign company to give me the kernel with my name replacing Linus’, only without the shame…), we are letting MS and other corps enter into coding automation and oopsie the name of their copyright-obfuscation machine?
Maybe it’s all crazy and we flubbed copyright fully, but having third party authorship stamps cryptographically verified in my repo sounds risky. The SCO thing was a dead companies last gasp, dying animals do desperate things.
Enshittification will ruin AI the same way it ruined the WWW and YouTube. We're in the golden era right now. Not 2027, 2028. Now now. The ads are coming.
Satya "please don't say slop" Nadella eat your heart out. Magnificent amounts of value are truly being added by this tech.
I'll add: it doesn't really matter if this was the integration dumbly appending a message or the LLM inserting the ad. Judging by the response to this submission, sneaky ad slop is now firmly inside the Overton window, so for MS it doesn't make sense NOT to do it.
I'm so tired of what initially looks like a perfect normal communication between two people, only to find that some third party has inserted itself like a parasite to exploit and extract human attention. That's why I use our sponsor, nord vpn ...
I have a somewhat similar problem with GitHub issue templates. They automatically add stuff I don't care about or wouldn't propose, and structure things in ways I don't like. Granted, I can edit this away, but it requires extra time and makes filing issues more work than before. Biggest case in point is the "I will adhere to the Code of Conduct" checkbox. In general I do not care about CoCs, and it is fascinating how CoCs leak into everywhere for some so-called "open source" projects. They don't seem to understand the issue when the licence does not require a CoC; even then the issue is not about the CoC in and of itself (though I also find them pointless), but that extra content is automatically added to issue templates in general, CoCs just being one of many spam options. And I also recall some donation ads that are automatically added too - I have no problem when projects request financial support, but if I file an issue then the issue is about the content of the issue, not about anything else.
I'm not a fan of LLMs injecting themselves into PR/commit content. If you use multiple models, basically whichever one is operating git gets all the credit. And even if you wrote all the code yourself and just submitted the PR with Claude Code (or whatever), it would attempt to take credit for the changes.
I currently have rules in all of my skill files forbidding models from advertising themselves or taking credit.
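For reference, the rule block I use looks roughly like this (my own wording; adapt it to your tooling):

    ## Attribution
    - Never add yourself (or any model/tool name) as author or co-author
      of a commit.
    - Never mention the model, tool, or vendor in commit messages, PR
      titles, or PR descriptions.
    - Never append promotional footers, links, or "generated with" lines
      to anything you write.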
Everyone is doing this now. Granted, on Codex / Claude Code you can disable it, but having it disabled isn't the default. For some reason on Cursor, they keep shoving "Made with Cursor" into my PR description despite me disabling attribution, which looks really stupid on a work PR.
I'm so tired of all this BS. Why did this become normal? And how do we not read this as cheap advertising?
Using an LLM to fix a spelling mistake is retardedly lazy.
Presumably they used a free version of the LLM, therefore it is completely understandable that it inserted a snippet of text advertising its use into the output. I mean using a free email provider also adds a line of text to the end of every email advertising the service by default - "Sent from iPhone" etc.
sed fixes typos faster. The absurd part is watching devs burn prod tokens on glorified autocorrect, wait through LLM lag for a spelling fix, and then act shocked when the output comes back as word salad with a coupon code glued to the end.
> Using an LLM to fix a spelling mistake is retardedly lazy.
If you do it manually, sure.
If you have an agent watching for code changes and automatically opening PRs for small fixes that don't need a human in the loop except for approving the change, it's the opposite of lazy. It eliminates all those tedious 1-point stories and lets the team focus on higher-value work that actually needs a person to think about it.
Given time all small changes will be done this way, and eventually there won't be a person reviewing them.
That scenario doesn't require any explicit "summoning", and if there's a human in the loop approving the change, certainly they can fix the typo themself.
Sounds like a great use of energy and tokens, not overkill at all
As much as AI uses a lot of energy, having something that fixes issues in the background is very likely to be a net saving if you consider the number of users who fail to complete a task due to the bug and have to either wait in a broken state or retry later.
It's probably using less energy than a person fixing the issue too. That's a guess though.
I am doing my very small part: I've been migrating a large part of my family and my employer away for a few years now. The world is better without Microslop. But unfortunately I know that this isn't always possible.
This looks like an ad for Raycast only, which does not appear to be affiliated with Microsoft or GitHub at all, so blaming Copilot or GitHub here is not justified.
Which does show that this is affiliated with GitHub, unlike what I thought. There are no mentions of this string in any code repository on GitHub (including the Raycast Copilot extension).
The path of reasoning the agent took that led it to generate the output. The GitHub search bits got posted after my comment, so while it is clearly real, it just seems injected by Raycast.
This is real. I do not have access to the path of reasoning, this ran through the GitHub copilot app which does not grant you access to the chain of thought.