2229: Fix diagnostic issue when using FakeReads in closures
This PR fixes a diagnostic issue caused by https://github.com/rust-lang/rust/pull/82536. That merged PR used a temporary workaround that feature-gated the FakeReads introduced as a result of pattern matching in closures.
The fix involves adding an optional closure `DefId` to the `ForLet` and `ForMatchedPlace` `FakeReadCause`s. This `DefId` will only be added if a closure pattern matches a `Place` starting with an upvar.
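A minimal sketch of the changed variants, assuming the payload described above (the `DefId` type and the rest of the enum are simplified stand-ins, not rustc's real definitions):

```rust
// Hedged sketch only: `DefId` stands in for rustc's def-id type, and the
// remaining FakeReadCause variants are elided.
pub struct DefId(pub u32);

pub enum FakeReadCause {
    // Carries the closure's DefId when the matched place starts with an
    // upvar captured by that closure.
    ForMatchedPlace(Option<DefId>),
    // Same rule for `let` bindings that pattern-match on an upvar place.
    ForLet(Option<DefId>),
    // ... other variants unchanged
}
```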
r? `@nikomatsakis`
Add an unstable --json=unused-externs flag to print unused externs
This adds an unstable flag to make rustc print a list of the `--extern` names not used by the crate.
This PR will enable cargo to collect unused dependencies from all units and provide warnings.
The companion PR to cargo is: https://github.com/rust-lang/cargo/pull/8437
The goal is eventual stabilization of this flag in rustc as well as in cargo.
Discussion of this feature is mostly contained in these threads: #57274, #72342, #72603
The feature builds upon the internal data structures added by #72342.
Externs are uniquely identified by name and the information is sufficient for cargo.
If the mode is enabled, rustc will print json messages like:
```
{"unused_extern_names":["byteorder","openssl","webpki"]}
```
This example is for a crate that was passed the byteorder, openssl and webpki dependencies but needed none of them.
### Q: Why not pass -Wunused-crate-dependencies?
A: See [ehuss's comment here](https://github.com/rust-lang/rust/issues/57274#issuecomment-624839355)
TLDR: it's cleaner. Rust's warning system wasn't built to be filtered or edited by cargo.
Even a basic implementation of the feature would have to change the "n warnings emitted" line that rustc prints at the end.
Cargo ideally wants to synthesize its own warnings anyway. For example, it would be hard for rustc to emit a warning like
"dependency foo is only used by dev targets" and suggest making it a dev-dependency instead.
### Q: Make rustc emit used or unused externs?
A: Emitting used externs has the advantage that it simplifies cargo's collection job.
However, emitting unused externs creates less data to be communicated between rustc and cargo.
Often you want to paste a command obtained from `cargo build -vv` to do something
completely unrelated, and the message is always emitted, even if no warning or error is.
At that point, even this tiny difference in "noise" matters. That's why I went with emitting unused externs.
### Q: One json msg per extern or a collective json msg?
A: Same as above, the data format should be concise. Having 30 lines for the 30 crates a crate uses would be distracting to readers.
Also it helps the cargo implementation to know that there aren't more unused deps coming.
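For illustration, a consumer such as cargo could deserialize the single per-crate message roughly like this (a hedged sketch; the use of `serde`/`serde_json` is an assumption for the example, not necessarily what cargo does):

```rust
// Parse rustc's single unused-externs JSON message out of an output line.
// The struct field mirrors the `unused_extern_names` key shown above.
use serde::Deserialize;

#[derive(Deserialize)]
struct UnusedExterns {
    unused_extern_names: Vec<String>,
}

fn parse_line(line: &str) -> Option<Vec<String>> {
    serde_json::from_str::<UnusedExterns>(line)
        .ok()
        .map(|msg| msg.unused_extern_names)
}

fn main() {
    let line = r#"{"unused_extern_names":["byteorder","openssl","webpki"]}"#;
    assert_eq!(
        parse_line(line).unwrap(),
        vec!["byteorder".to_string(), "openssl".into(), "webpki".into()]
    );
}
```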
### Q: Why use names of externs instead of e.g. paths?
A: Names are both sufficient and necessary to uniquely identify a passed `--extern` arg.
Names are sufficient because you *must* pass a name when passing an `--extern` arg.
Passing a path, on the other hand, is optional, since rustc might also figure out a crate's location from the file system.
You can also put multiple paths for the same extern name, via e.g. `--extern hello=/usr/lib/hello.rmeta --extern hello=/usr/local/lib/hello.rmeta`,
but rustc will only ever use one of those paths.
Also, paths don't identify a dependency uniquely as it is possible to have multiple different extern names point to the same path.
So paths are ill-suited for identification.
### Q: What about 2015 edition crates?
A: They are fully supported.
Even on the 2015 edition, an explicit `--extern` flag is required for `extern crate foo;` to work (outside of sysroot crates, which this flag doesn't warn about anyway).
So the lint would still fire on 2015 edition crates if a dependency specified in Cargo.toml is never pulled in via `extern crate foo;` or similar.
The lint won't fire if your sole use of a dependency in the crate is an `extern crate foo;` statement, but detecting that isn't its job.
For detecting unused `extern crate foo` statements, there is the `unused_extern_crates` lint
which can be enabled by `#![warn(unused_extern_crates)]` or similar.
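For example (with a hypothetical dependency `foo`):

```rust
// Warn on `extern crate` items that are never otherwise used in the crate.
#![warn(unused_extern_crates)]

// If nothing below refers to `foo`, this item triggers unused_extern_crates,
// while the new --json=unused-externs output would still consider `foo` used.
extern crate foo;

fn main() {}
```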
cc `@jsgf` `@ehuss` `@petrochenkov` `@estebank`
Remove nightly features in rustc_type_ir
`rustc_type_ir` will be used as a type library by Chalk, which we want to be able to build on stable, so this PR removes the current nightly features used.
Add an Mmap wrapper to rustc_data_structures
This wrapper implements StableAddress and falls back to directly reading the file on wasm32.
Taken from #83640, which I will close due to the perf regression.
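A rough sketch of the idea (not the actual rustc_data_structures code; the real wrapper also implements `StableAddress`, and the `memmap2` crate is just an illustrative choice here):

```rust
// Memory-map the file on most targets, but fall back to reading it into a
// Vec<u8> on wasm32, where memory mapping is unavailable.
use std::fs::File;
use std::io;
use std::ops::Deref;

pub struct Mmap(
    #[cfg(not(target_arch = "wasm32"))] memmap2::Mmap,
    #[cfg(target_arch = "wasm32")] Vec<u8>,
);

impl Mmap {
    #[cfg(not(target_arch = "wasm32"))]
    pub unsafe fn map(file: File) -> io::Result<Self> {
        memmap2::Mmap::map(&file).map(Mmap)
    }

    #[cfg(target_arch = "wasm32")]
    pub unsafe fn map(mut file: File) -> io::Result<Self> {
        use std::io::Read;
        let mut data = Vec::new();
        file.read_to_end(&mut data)?;
        Ok(Mmap(data))
    }
}

impl Deref for Mmap {
    type Target = [u8];
    fn deref(&self) -> &[u8] {
        &self.0
    }
}
```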
Translate counters from Rust 1-based to LLVM 0-based counter ids
A colleague contacted me and asked why Rust's counters start at 1, when
Clang's appear to start at 0. There is a reason why Rust's internal
counters start at 1 (see the docs), and I tried to keep them consistent
when codegenned to LLVM's coverage mapping format. LLVM should be
tolerant of missing counters, but as my colleague pointed out,
`llvm-cov` will silently fail to generate a coverage report for a
function based on LLVM's assumption that the counters are 0-based.
See:
https://github.com/llvm/llvm-project/blob/main/llvm/lib/ProfileData/Coverage/CoverageMapping.cpp#L170
Apparently, if, for example, a function has no branches, it would have
exactly 1 counter. `CounterValues.size()` would be 1, and (with the
1-based index), the counter ID would be 1. This would fail the check
and abort reporting coverage for the function.
It turns out that by correcting for this during coverage map generation,
by subtracting 1 from the Rust Counter ID (both when generating the
counter increment intrinsic call, and when adding counters to the map),
some uncovered functions (including in tests) now appear covered! This
corrects the coverage for a few tests!
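A minimal sketch of the off-by-one translation being described (illustrative names only, not the actual codegen code):

```rust
// Rust's coverage counters are 1-based internally; LLVM's coverage mapping
// expects 0-based counter IDs, so subtract 1 at the codegen boundary.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct CounterId(u32); // 1-based; 0 is reserved internally

fn to_llvm_counter_id(id: CounterId) -> u32 {
    assert!(id.0 >= 1, "Rust counter IDs start at 1");
    id.0 - 1
}

fn main() {
    // A function with no branches has exactly one counter: Rust ID 1,
    // which LLVM must see as counter 0.
    assert_eq!(to_llvm_counter_id(CounterId(1)), 0);
}
```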
r? `@tmandry`
FYI: `@wesleywiser`
Avoid sorting by DefId for `necessary_variants()`
Follow-up to https://github.com/rust-lang/rust/pull/83074. Originally I tried removing `impl Ord for DefId` but that hit *lots* of errors 😅 so I thought I would start with easy things.
I am not sure whether this could actually cause invalid query results, but this is used from `MarkSymbolVisitor::visit_arm` so it's at least feasible.
r? `@Aaron1011`
Maintain supported sanitizers as a target property
In an effort to remove a hard-coded allow-list for target-sanitizer support correspondence, this PR moves the configuration to the target options.
Perhaps the one notable change made in this PR is this doc-comment:
```rust
/// The sanitizers supported by this target
///
/// Note that the support here is at a codegen level. If the machine code with sanitizer
/// enabled can be generated on this target, but the necessary supporting libraries are not
/// distributed with the target, the sanitizer should still appear in this list for the target.
```
Previously, a target would typically be added to the allow-list at the same time as the supporting runtime libraries were shipped for it. However, whether we ship the runtime libraries or not needn't be baked into the compiler; and if we don't, users will receive a significantly more direct error about the library not being found.
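A hedged, self-contained sketch of the shape of this change; the real code uses rustc_target's own types on the target options, so the names and structures below are simplified stand-ins:

```rust
// Illustrative only: each target carries the set of sanitizers it supports,
// and argument parsing checks requests against that set instead of a
// hard-coded allow-list keyed on target names.
use std::collections::BTreeSet;

#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Sanitizer {
    Address,
    Leak,
    Memory,
    Thread,
}

struct TargetOptions {
    supported_sanitizers: BTreeSet<Sanitizer>,
}

fn check_sanitizer(requested: Sanitizer, target: &TargetOptions) -> Result<(), String> {
    if target.supported_sanitizers.contains(&requested) {
        Ok(())
    } else {
        Err(format!("{:?} sanitizer is not supported for this target", requested))
    }
}

fn main() {
    let linux_like = TargetOptions {
        supported_sanitizers: [Sanitizer::Address, Sanitizer::Leak].iter().copied().collect(),
    };
    assert!(check_sanitizer(Sanitizer::Address, &linux_like).is_ok());
    assert!(check_sanitizer(Sanitizer::Memory, &linux_like).is_err());
}
```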
Fixes #81802
This commit adds an additional target property – `supported_sanitizers`,
and replaces the hardcoded allowlists in argument parsing to use this
new property.
Fixes #81802
2229: Support migration via rustfix
- Adds support of machine applicable suggestions for `disjoint_capture_drop_reorder`.
- Doesn't migrate in the case of pre-existing bugs in user code
r? `@nikomatsakis`
Only public items are monomorphization roots. This can be confirmed by noting that this program compiles:
```rust
fn foo<T>() { if true { foo::<Option<T>>() } }
fn bar() { foo::<()>() }
```
Rollup of 5 pull requests
Successful merges:
- #83535 (Break when there is a mismatch in the type count)
- #83721 (Add a button to copy the "use statement")
- #83740 (Fix comment typo in once.rs)
- #83745 (Add my new email address to .mailmap)
- #83754 (Add test to ensure search tabs behaviour)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Break when there is a mismatch in the type count
When other errors are generated, there can be a mismatch between the
number of input types in the MIR and the number in the function signature itself.
Break from the comparison loop if this is the case to prevent an
out-of-bounds access.
Fixes #83499
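A hedged sketch of the guard being described, with plain strings standing in for types (illustrative only, not the actual rustc code):

```rust
// When earlier errors leave MIR with more input types than the signature,
// stop comparing instead of indexing past the end of the signature's inputs.
fn compare_inputs(mir_input_tys: &[&str], sig_input_tys: &[&str]) {
    for (idx, mir_ty) in mir_input_tys.iter().enumerate() {
        if idx >= sig_input_tys.len() {
            // Type counts disagree because of a previous error; bail out.
            break;
        }
        if mir_ty != &sig_input_tys[idx] {
            println!("type mismatch at argument {}: {} vs {}", idx, mir_ty, sig_input_tys[idx]);
        }
    }
}

fn main() {
    compare_inputs(&["u32", "bool", "String"], &["u32", "bool"]);
}
```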
normalize mir::Constant differently from ty::Const in preparation for valtrees
Valtrees are unable to represent many kinds of constant values (this is on purpose). For constants that are used at runtime, we do not need a valtree representation and can thus use a different form of evaluation. In order to make this explicit and less fragile, I added a `fold_constant` method to `TypeFolder` and implemented it for normalization. Normalization can now, when it wants to eagerly evaluate a constant, normalize `mir::Constant` directly into a `mir::ConstantKind::Val` instead of relying on the `ty::Const` evaluation.
In the future we can get rid of the `ty::Const` in there entirely and add our own `Unevaluated` variant to `mir::ConstantKind`. This would allow us to remove the `promoted` field from `ty::ConstKind::Unevaluated`, as promoteds can never occur in the type system.
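A hedged sketch of the shape of the `fold_constant` hook described above; the method name follows the description, but all the types are simplified stand-ins for the rustc ones:

```rust
// Illustrative only: the folder gains a constant-folding hook that the
// normalizer overrides to evaluate runtime constants straight to `Val`,
// without going through a `ty::Const`.
enum MirConstantKind {
    Ty(u32),  // stand-in for an unevaluated ty::Const
    Val(u64), // stand-in for an already evaluated value
}

trait TypeFolder {
    // Default behaviour: leave the constant alone.
    fn fold_constant(&mut self, c: MirConstantKind) -> MirConstantKind {
        c
    }
}

struct NormalizationFolder;

impl TypeFolder for NormalizationFolder {
    fn fold_constant(&mut self, c: MirConstantKind) -> MirConstantKind {
        match c {
            MirConstantKind::Ty(unevaluated) => MirConstantKind::Val(evaluate(unevaluated)),
            val @ MirConstantKind::Val(_) => val,
        }
    }
}

// Placeholder for const evaluation.
fn evaluate(c: u32) -> u64 {
    c as u64
}
```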
cc `@rust-lang/wg-const-eval`
r? `@lcnr`
Rename `#[doc(spotlight)]` to `#[doc(notable_trait)]`
Fixes #80936.
"spotlight" is not a very specific or self-explaining name.
Additionally, the dialog that it triggers is called "Notable traits".
So, "notable trait" is a better name.
* Rename `#[doc(spotlight)]` to `#[doc(notable_trait)]`
* Rename `#![feature(doc_spotlight)]` to `#![feature(doc_notable_trait)]`
* Update documentation
* Improve documentation
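Usage after the rename looks like this (the trait name here is made up for illustration):

```rust
#![feature(doc_notable_trait)]

/// Marking a trait as notable makes rustdoc surface it in the
/// "Notable traits" dialog for functions returning types that implement it.
#[doc(notable_trait)]
pub trait Notable {}
```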
r? `@Manishearth`
Fix expected/found order on impl trait projection mismatch error
Fixes #68561
This PR adds a new `ObligationCauseCode` used when checking the concrete type of an impl trait satisfies its bounds, and checks for that cause code in the existing test to see if a projection's normalized type should be the "expected" or "found" type.
The second commit adds a `peel_derives` to that test, which appears to be necessary in some cases (see projection-mismatch-in-impl-where-clause.rs, which would still give expected/found in the wrong order otherwise). This caused some other changes in diagnostics not involving impl trait, but they look correct to me.
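A hedged illustration of the kind of code this affects (not the exact reproduction from the linked issue, and it intentionally does not compile): the opaque type's `Iterator::Item` projection is `String` while the bound requires `u32`, and the diagnostic should report `u32` as "expected" and `String` as "found".

```rust
// This fails to type-check: the concrete type behind `impl Iterator<Item = u32>`
// yields `String`, so the projection mismatch error is emitted here.
fn evens() -> impl Iterator<Item = u32> {
    vec![String::from("2"), String::from("4")].into_iter()
}

fn main() {}
```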
Stream the dep-graph to a file instead of storing it in-memory.
This is a reimplementation of #60035.
Instead of storing the dep-graph in-memory, the nodes are encoded into a temporary
file as they come in. At the end of a successful compilation, this file is renamed
to become the persistent dep-graph, to be decoded during the next
compilation session.
This two-files scheme avoids overwriting the dep-graph on unsuccessful or crashing compilations.
The structure of the file is modified to be the sequence of `(DepNode, Fingerprint, EdgesVec)`.
The deserialization is responsible for going to the more compressed representation.
The `node_count` and `edge_count` are stored in the last 16 bytes of the file,
in order to accurately reserve capacity for the vectors.
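For illustration, a decoder could read that trailer roughly as follows (a hedged sketch; the 2×8-byte little-endian layout and the file name are assumptions, not the actual encoding):

```rust
// Read the node/edge counts from the last 16 bytes of the dep-graph file so
// that capacity can be reserved before replaying the
// (DepNode, Fingerprint, EdgesVec) records from the start of the file.
use std::convert::TryInto;
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom};

fn read_counts(path: &str) -> io::Result<(u64, u64)> {
    let mut file = File::open(path)?;
    file.seek(SeekFrom::End(-16))?;
    let mut trailer = [0u8; 16];
    file.read_exact(&mut trailer)?;
    let node_count = u64::from_le_bytes(trailer[0..8].try_into().unwrap());
    let edge_count = u64::from_le_bytes(trailer[8..16].try_into().unwrap());
    Ok((node_count, edge_count))
}

fn main() -> io::Result<()> {
    let (nodes, edges) = read_counts("dep-graph.bin")?;
    println!("reserving for {} nodes and {} edges", nodes, edges);
    Ok(())
}
```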
At the end of the compilation, the encoder is flushed and dropped.
The graph is not usable after this point: any creation of a node will ICE.
I had to retrofit the debugging options, which is not really pretty.