Add more checks for `unnamed_fields` during HIR analysis
Fixes #121151
I also found that we don't prevent enums from being used as unnamed fields here, so
```rs
#[repr(C)]
#[derive(Debug)]
enum A {
    #[default]
    B,
    C,
}

#[repr(C)]
#[derive(Debug)]
struct D {
    _: A,
}
```
leads to an ICE on a `self.is_struct() || self.is_union()` assertion, so I fixed that too.
rustdoc: fix and refactor HTML rendering a bit
* refactoring: get rid of a bunch of manual `f.alternate()` branches
  * not sure why this wasn't already done; is this perf-sensitive?
* fix an ICE in debug builds of rustdoc
  * rustdoc used to crash on empty outlives-bounds: `where 'a:`
* properly escape const generic defaults
* actually print empty trait and outlives-bounds (doesn't work for cross-crate reexports yet, will fix that at some other point) since they can have semantic significance:
  * outlives-bounds: they force lifetime params to be early-bound instead of late-bound, which is technically speaking part of the public API (see the sketch below)
  * trait-bounds: they can affect well-formedness, consider:
    * makeshift “const-evaluatable” bounds under `generic_const_exprs`
    * bounds to force wf-checking in light of #100041 (quite artificial, I know, I couldn't figure out something better), see https://github.com/rust-lang/rust/pull/121160#discussion_r1491563816
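For illustration, a minimal sketch (not code from the PR) of why an empty outlives-bound is semantically significant:

```rs
// In `late`, `'a` is late-bound: it has no bounds and only appears in the
// signature. In `early`, the empty outlives-bound `where 'a:` forces `'a`
// to be early-bound, an observable difference in the public API (e.g. it
// changes whether the lifetime can be specified explicitly with turbofish).
fn late<'a>(x: &'a u8) -> &'a u8 { x }
fn early<'a>(x: &'a u8) -> &'a u8 where 'a: { x }

fn main() {
    assert_eq!(*late(&1), 1);
    assert_eq!(*early(&2), 2);
}
```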
Properly deal with weak alias types as self types of impls
Fixes #114216.
Fixes #116100.
Not super happy about the two ad hoc “normalization” implementations for weak alias types:
1. In `inherent_impls`: The “peeling”, i.e. normalization to [“WHNF”][whnf]: Semantically that's exactly what we want (neither proper normalization nor shallow normalization would be correct here). Basically a weak alias type is “nominal” (well...^^) if the WHNF is nominal. [#97974](https://github.com/rust-lang/rust/pull/97974) followed the same approach.
2. In `constrained_generic_params`: Generic parameters are constrained by a weak alias type if the corresponding “normalized” type constrains them (where we only normalize *weak* alias types, not arbitrary ones). Weak alias types are injective if the corresponding “normalized” type is injective.
Both have ad hoc overflow detection mechanisms.
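For context, weak alias types are the lazy type aliases behind `#![feature(lazy_type_alias)]`; here is a minimal sketch (assumed setup on a nightly compiler, not the exact reproducer from the linked issues) of the kind of code these normalizations have to handle, namely an alias as the self type of an impl:

```rs
#![feature(lazy_type_alias)]
#![allow(incomplete_features)]

// With lazy type aliases, `Alias` is a weak alias type; using it as the
// self type of an inherent impl requires the compiler to peel the alias
// down to the underlying nominal type (`Foo`).
struct Foo;

type Alias = Foo;

impl Alias {
    fn method(&self) {}
}

fn main() {
    Foo.method();
}
```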
**Coherence** is handled in #117164.
r? `@oli-obk` or types
[whnf]: https://en.wikipedia.org/wiki/Lambda_calculus_definition#Weak_head_normal_form
When these extra directives were ported over as part of #112300, it made sense
to introduce `iter_header_extra` and pass them in as an extra argument.
But now that #120881 has added a `mode` parameter to `iter_header` for its own
purposes, it's slightly simpler to move the coverage special-case code directly
into `iter_header` as well. This lets us get rid of `iter_header_extra`.
As the reason tidy rejects a test name may seem slightly opaque, provide a suggestion on how to rename the file.
There are some tests which would merely require reordering the name according to the rule. I could modify the diagnostic further to identify those, but doing so would make it prone to bad suggestions. I have opted to trust contributors to recognize that the diagnostic is robotic, as the pattern we are linting on is easier to match if we do not speculate on which parts of the name are meaningful: sometimes a word is a reason, but sometimes it is a mere "tag", such as with a pair like:
issue-314159265-blue.rs
issue-314159265-red.rs
Starting them with `red-` and `blue-` means they do not sort together,
despite being related, and the color names are still not very descriptive.
Recognizing a good name is an open-ended task, though this pair might be:
colored-circle-gen-blue.rs
colored-circle-gen-red.rs
Deciding exactly how to solve this is not the business of tidy; its business is only recognizing that there is something to solve.
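As a rough illustration (not tidy's actual implementation), the pattern being linted on is just a name that leads with an issue reference, with no guess about which of the remaining parts are meaningful:

```rs
// Rough sketch, not tidy's real check: flag a test file name that leads
// with an issue reference like `issue-314159265-blue.rs`, without trying
// to interpret the rest of the name.
fn leads_with_issue_number(file_name: &str) -> bool {
    file_name
        .strip_prefix("issue-")
        .and_then(|rest| rest.chars().next())
        .map_or(false, |c| c.is_ascii_digit())
}

fn main() {
    assert!(leads_with_issue_number("issue-314159265-blue.rs"));
    assert!(!leads_with_issue_number("colored-circle-gen-blue.rs"));
}
```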
The moment we get a candidate without a guard, the return block becomes a fresh block linked to nothing. So we can keep assigning a fresh block on every iteration to reuse the `next_prebinding` logic.
create stamp file for clippy
Due to a missing stamp file, we were downloading clippy (and applying nix patches, if enabled) every time `Config::download_clippy` was called. This change fixes that by creating the stamp file at the end of the function.
Fixes #119442
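A minimal sketch of the stamp-file pattern (hypothetical names, not bootstrap's actual code): check the stamp before doing the expensive work, and write it once the work has succeeded.

```rs
use std::fs;
use std::path::Path;

// Hypothetical sketch of the stamp-file pattern: the stamp records that the
// download (and any patching) already happened for `key`, so later calls
// return early instead of re-downloading.
fn download_component(stamp: &Path, key: &str, download_and_patch: impl FnOnce()) {
    if fs::read_to_string(stamp).map(|s| s.trim() == key).unwrap_or(false) {
        return; // already downloaded for this key, nothing to do
    }
    download_and_patch();
    // Written only after the work succeeded, mirroring "create the stamp
    // file at the end of the function".
    fs::write(stamp, key).expect("failed to write stamp file");
}

fn main() {
    let stamp = Path::new("clippy.stamp");
    download_component(stamp, "some-toolchain-key", || println!("downloading clippy..."));
    // The second call is a no-op because the stamp already matches the key.
    download_component(stamp, "some-toolchain-key", || println!("downloading clippy..."));
}
```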
Fix `cfg(target_abi = "sim")` on `i386-apple-ios`
Since https://github.com/rust-lang/rust/issues/80970 is stabilizing, I went and had a look, and found that the result was wrong on `i386-apple-ios`.
r? rust-lang/macos
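For reference, a small sketch (illustrative, not from the PR) of what the fixed cfg lets code do, namely distinguish simulator builds from device builds:

```rs
// With `target_abi` usable in `cfg`, code can tell the iOS simulator apart
// from device builds; on `i386-apple-ios` (a simulator-only target) the
// first definition should now be the one that gets compiled in.
#[cfg(all(target_os = "ios", target_abi = "sim"))]
fn describe_target() -> &'static str {
    "iOS simulator"
}

#[cfg(not(all(target_os = "ios", target_abi = "sim")))]
fn describe_target() -> &'static str {
    "not the iOS simulator"
}

fn main() {
    println!("{}", describe_target());
}
```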
Use fulfillment in next trait solver coherence
Use fulfillment in the new trait solver's `impl_intersection_has_impossible_obligation` routine. This means that inference that falls out of processing other obligations can influence whether we can determine if an obligation is impossible to satisfy. See the committed test.
This should be completely sound, since evaluation and fulfillment both respect intercrate mode equally.
We run the risk of breaking coherence later if we were to change the rules of fulfillment and/or inference during coherence, but this is a problem which affects evaluation as well, since nested obligations from a trait goal are processed together and can influence each other in the same way.
r? lcnr
cc #114862
Also changed `ObligationCtxt` -> `FulfillmentCtxt` because it feels kind of redundant to use an ocx here. I don't really care enough and can change it back if it really matters much.
Fix typo in VecDeque::handle_capacity_increase() doc comment.
Strategies B and C both show a full buffer before the capacity increase, while strategy A showed one element still empty. Filled in the last element.
errors: only eagerly translate subdiagnostics
Subdiagnostics don't need to be lazily translated, they can always be eagerly translated. Eager translation is slightly more complex as we need to have a `DiagCtxt` available to perform the translation, which involves slightly more threading of that context.
This slight increase in complexity should enable later simplifications - like passing `DiagCtxt` into `AddToDiagnostic` and moving Fluent messages into the diagnostic structs rather than having them in separate files (working on that was what led to this change).
r? `@nnethercote`
Don't use mem::zeroed in vec::IntoIter
`mem::zeroed` is not a trivial function. Maybe it was once, but now it involves multiple locals, copies, and an intrinsic that gets monomorphized into a call to `panic_nounwind` for iterators of types like `Vec<&T>`. Of course all that complexity is trivially optimized out, but generating a bunch of IR where we don't need to just so we can optimize it away later is silly.
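For illustration, a minimal sketch (not code from the PR) of why the zero-validity check matters for a reference type:

```rs
fn main() {
    // `&u8` has no valid all-zero bit pattern. The compiler even lints on
    // this, and at runtime the validity check inside `mem::zeroed` aborts
    // instead of producing an invalid reference; that check is the extra
    // machinery described above.
    #[allow(invalid_value)]
    let invalid: &u8 = unsafe { std::mem::zeroed() };
    let _ = invalid;
}
```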
tidy: reduce allocs
this reduces allocs in tidy from (dhat output)
```
==31349== Total: 1,365,199,543 bytes in 4,774,213 blocks
==31349== At t-gmax: 10,975,708 bytes in 66,093 blocks
==31349== At t-end: 2,880,947 bytes in 12,332 blocks
==31349== Reads: 5,210,008,956 bytes
==31349== Writes: 1,280,920,127 bytes
```
to
```
==66633== Total: 791,565,538 bytes in 3,503,144 blocks
==66633== At t-gmax: 10,914,511 bytes in 65,997 blocks
==66633== At t-end: 395,531 bytes in 941 blocks
==66633== Reads: 4,249,388,949 bytes
==66633== Writes: 814,119,580 bytes
```
<del>by wrapping regex and updating `ignore` (the effect is probably not only from `ignore`; I didn't measure)</del>
This also moves one more regex into `Lazy` to reduce regex rebuilds.
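A minimal sketch of that `Lazy` pattern (pattern and names are illustrative, not tidy's actual code; assumes the `once_cell` and `regex` crates):

```rs
use once_cell::sync::Lazy;
use regex::Regex;

// Compiling a `Regex` is comparatively expensive, so hoist it into a `Lazy`
// static that is built once on first use and reused on every later call.
static TODO_COMMENT: Lazy<Regex> =
    Lazy::new(|| Regex::new(r"//\s*TODO\b").expect("valid regex"));

fn has_todo(line: &str) -> bool {
    TODO_COMMENT.is_match(line)
}

fn main() {
    assert!(has_todo("let x = 1; // TODO: remove"));
    assert!(!has_todo("let x = 1;"));
}
```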