Change the type of `AssertModuleSource::available_cgus`.
It's currently a `BTreeSet<Symbol>`, which is a strange type. The
`BTreeSet` suggests that element order is important, but `Symbol` is a
type whose ordering isn't useful to humans. The ordering of the
collection only manifests in an obscure error message ("no module named
`...`") that doesn't appear in any tests.
This commit changes the element type from `Symbol` to `String`, which is more typical.
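A minimal illustration of why the element type matters for the set's ordering (the message text below is only illustrative):

```rust
// With BTreeSet<String>, iteration order is alphabetical and therefore
// meaningful to a reader; with BTreeSet<Symbol>, the order would follow the
// symbol interner's indices, which mean nothing to humans.
use std::collections::BTreeSet;

fn main() {
    let available_cgus: BTreeSet<String> = ["zebra-cgu", "alpha-cgu", "middle-cgu"]
        .iter()
        .map(|s| s.to_string())
        .collect();

    let listed = available_cgus.iter().cloned().collect::<Vec<_>>().join(", ");
    println!("no module named `missing` (available: {listed})");
}
```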
rustc_metadata: track the simplified Self type for every trait impl.
For the `trait_impls_of` query, we index the impls by `fast_reject::SimplifiedType` (a "shallow type"), which allows simple cases like `impl Trait<..> for Foo<..>` to be iterated over efficiently by e.g. `for_each_relevant_impl`.
This PR encodes the `fast_reject::SimplifiedType` cross-crate to avoid needing to deserialize the `Self` type of every `impl` in order to simplify it - the simplification itself should be cheap, but the deserialization is less so.
We could go further from here and make loading the list of impls lazy, for a given simplified `Self` type, but that would have more complicated implications for performance, and this PR doesn't do anything in that regard.
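Roughly, the indexing scheme resembles the following sketch (hypothetical, heavily simplified types; string literals stand in for real impls):

```rust
use std::collections::HashMap;

// Shallow stand-in for `fast_reject::SimplifiedType`: a key derived from the
// outermost constructor of an impl's Self type.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum SimplifiedType {
    Foo,
    Bar,
}

// Toy index over a trait's impls: blanket impls are always relevant, while
// non-blanket impls are bucketed by their simplified Self type.
#[derive(Default)]
struct TraitImpls {
    blanket_impls: Vec<&'static str>,
    non_blanket_impls: HashMap<SimplifiedType, Vec<&'static str>>,
}

impl TraitImpls {
    fn for_each_relevant_impl(&self, self_ty: Option<SimplifiedType>, mut f: impl FnMut(&str)) {
        for &imp in &self.blanket_impls {
            f(imp); // e.g. `impl<T> Trait for T` can apply to anything
        }
        if let Some(ty) = self_ty {
            // Only the matching bucket is walked; other impls are never touched.
            for &imp in self.non_blanket_impls.get(&ty).into_iter().flatten() {
                f(imp);
            }
        }
    }
}

fn main() {
    let mut impls = TraitImpls::default();
    impls.blanket_impls.push("impl<T: Clone> Trait for T");
    impls
        .non_blanket_impls
        .entry(SimplifiedType::Foo)
        .or_default()
        .push("impl Trait<u8> for Foo<u8>");
    // Looking up `Foo<_>` visits its bucket plus blanket impls:
    impls.for_each_relevant_impl(Some(SimplifiedType::Foo), |i| println!("{i}"));
    // Looking up `Bar<_>` only visits blanket impls:
    impls.for_each_relevant_impl(Some(SimplifiedType::Bar), |i| println!("{i}"));
}
```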
r? @nikomatsakis cc @Mark-Simulacrum
Limit I/O vector count on Unix
Unix systems enforce limits on the vector count when performing vectored I/O via the `readv` and `writev` system calls, returning EINVAL when these limits are exceeded. This changes the standard library to handle those limits as short reads and writes, so its users are not forced to query the limits through platform-specific mechanisms.
Fixes #68042
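Conceptually, the handling amounts to capping how many buffers reach the OS so the caller just sees a short write; a minimal sketch (not the actual std implementation, with the limit hard-coded for illustration):

```rust
use std::io::{self, IoSlice, Write};

// Assumed limit for the sketch; the real code queries the platform's value.
const IOV_MAX: usize = 1024;

// Pass at most IOV_MAX buffers per call; anything beyond that is reported as
// a short write, and the caller retries with the remaining buffers.
fn write_vectored_capped<W: Write>(writer: &mut W, bufs: &[IoSlice<'_>]) -> io::Result<usize> {
    let capped = &bufs[..bufs.len().min(IOV_MAX)];
    writer.write_vectored(capped)
}

fn main() -> io::Result<()> {
    let bufs = [IoSlice::new(b"hello "), IoSlice::new(b"world\n")];
    let mut out = Vec::new(); // Vec<u8> implements Write
    let n = write_vectored_capped(&mut out, &bufs)?;
    println!("wrote {n} bytes");
    Ok(())
}
```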
Show backtrace numbers in backtraces whenever more than one frame is involved
Previously, we only displayed 'frame' numbers in a macro backtrace when more
than two frames were involved. This commit should help make backtraces
more readable, since these kinds of messages can quickly get confusing.
All #[cfg(unix)] platforms follow the POSIX standard and define `_SC_IOV_MAX`, so we can rely purely on POSIX semantics to determine the limit on the I/O vector count.
Keep the I/O vector count limit in a `SyncOnceCell` to avoid the overhead of repeatedly calling `sysconf`; POSIX guarantees these limits do not change during the lifetime of a process.
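A minimal sketch of that caching, assuming a Unix target and the `libc` crate as a dependency (std used the then-unstable `SyncOnceCell`; `OnceLock` is its stable counterpart here):

```rust
use std::sync::OnceLock;

// Query sysconf(_SC_IOV_MAX) at most once per process and cache the result;
// POSIX guarantees the value does not change for the lifetime of the process.
fn max_iov() -> usize {
    static LIMIT: OnceLock<usize> = OnceLock::new();
    *LIMIT.get_or_init(|| {
        let raw = unsafe { libc::sysconf(libc::_SC_IOV_MAX) };
        // Fall back to a conservative value if the limit is indeterminate.
        if raw > 0 { raw as usize } else { 16 }
    })
}

fn main() {
    println!("I/O vector count limit: {}", max_iov());
}
```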
Both Linux and macOS enforce limits on the vector count when performing vectored I/O via the `readv` and `writev` system calls, returning EINVAL when these limits are exceeded. This changes the standard library to handle those limits as short reads and writes, so its users are not forced to query the limits through platform-specific mechanisms.
Add regression test for #72787
Fixes #72787
~~Still waiting on running tests locally to bless the error output~~
r? @lcnr
polymorphization: various improvements
This PR includes a handful of polymorphisation-related changes:
- @Mark-Simulacrum's suggestions [from this comment](https://github.com/rust-lang/rust/pull/74633#issuecomment-668684433):
  - Use a `FiniteBitSet<u32>` over a `FiniteBitSet<u64>`, as most functions won't have 64 generic parameters (see the sketch after this list).
  - Don't encode polymorphisation results in metadata when every parameter is used (in this case, just invoking polymorphisation will probably be quicker).
- @lcnr's suggestion [from this comment](https://github.com/rust-lang/rust/pull/74717#discussion_r463690015).
- Add a debug assertion in `ensure_monomorphic_enough` to make sure that polymorphisation did what we expect.
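A hypothetical, much-simplified sketch of the bitset idea (not rustc's actual `FiniteBitSet`): unused generic parameters are recorded as bits in a `u32`, and an empty set means there is nothing worth encoding in metadata.

```rust
// Hypothetical sketch: bit `i` set means generic parameter `i` is unused and
// can be polymorphized away. 32 bits covers virtually every function.
#[derive(Clone, Copy, Default, PartialEq, Eq, Debug)]
struct UnusedParams(u32);

impl UnusedParams {
    fn mark_unused(&mut self, param_index: u32) {
        debug_assert!(param_index < 32);
        self.0 |= 1 << param_index;
    }

    fn is_unused(&self, param_index: u32) -> bool {
        self.0 & (1 << param_index) != 0
    }

    // When every parameter is used, recomputing this (empty) result is likely
    // cheaper than encoding and decoding it in crate metadata.
    fn is_empty(&self) -> bool {
        self.0 == 0
    }
}

fn main() {
    let mut unused = UnusedParams::default();
    unused.mark_unused(1); // the second generic parameter is never used
    assert!(unused.is_unused(1));
    assert!(!unused.is_empty());
}
```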
r? @lcnr
Completes support for coverage in external crates
Follow-up to #74959:
The prior PR corrected errors encountered when trying to generate the coverage map for source code inlined from external crates (including macros and generics) by not adding external DefIds to the coverage map.
This made it possible to generate a coverage report that includes external crates, but the external crate coverage was incomplete (it did not include coverage for the DefIds that were eliminated).
The root issue was that the coverage map converted Span locations to source files and locations using the SourceMap of the current crate, which would not work for spans from external crates (compiled with a different SourceMap).
The solution was to convert the Spans to filename and location during MIR generation instead, so that precompiled external crates already have the correct source code locations embedded in their MIR when imported into another crate.
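A rough sketch of that flow, using hypothetical, simplified types rather than rustc's actual Span/SourceMap/MIR structures: the file/line/column region is computed while the defining crate's source map is still in hand, and that region is what gets embedded in (and serialized with) the MIR.

```rust
// Hypothetical types for illustration only (not rustc's API).
#[derive(Clone, Debug)]
struct CodeRegion {
    file_name: String,
    start_line: usize,
    start_col: usize,
    end_line: usize,
    end_col: usize,
}

struct Span { lo: usize, hi: usize }

struct SourceMap { file_name: String, line_starts: Vec<usize> }

impl SourceMap {
    // Map a byte offset to a 1-based (line, column) pair.
    fn lookup(&self, pos: usize) -> (usize, usize) {
        let line = self.line_starts.partition_point(|&start| start <= pos);
        (line, pos - self.line_starts[line - 1] + 1)
    }

    // Done during MIR generation, while the defining crate's source map exists;
    // the resulting CodeRegion travels with the precompiled MIR, so importing
    // crates never need this SourceMap.
    fn make_code_region(&self, span: &Span) -> CodeRegion {
        let (start_line, start_col) = self.lookup(span.lo);
        let (end_line, end_col) = self.lookup(span.hi);
        CodeRegion {
            file_name: self.file_name.clone(),
            start_line,
            start_col,
            end_line,
            end_col,
        }
    }
}

fn main() {
    let sm = SourceMap { file_name: "lib.rs".into(), line_starts: vec![0, 20, 45] };
    let region = sm.make_code_region(&Span { lo: 25, hi: 40 });
    println!("{region:?}");
}
```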
@wesleywiser FYI
r? @tmandry