Refactor fallback code to prepare for never type
This PR contains cherry-picks of some of `@nikomatsakis's` work from #79366, and shouldn't (AFAICT) represent any change in behavior. However, the refactoring is good regardless of the never type work being landed, and will reduce the size of those eventual PR(s) (and rebase pain).
I am not personally an expert on this code, and the commits are essentially 100% `@nikomatsakis's`, but they do seem reasonable to the best of my understanding. Happy to edit with review, of course. The commits are best reviewed in sequence rather than all together.
r? `@jackh726` perhaps?
This matches Clang's behavior:
973cb2c326/clang/lib/CodeGen/CodeGenModule.cpp (L1038-L1040)
Even if `dso_local` were properly supported in this way on macOS, it seems
incorrect to add this annotation as liberally as we did. The `dso_local`
annotation is for symbols that ultimately end up in the same linkage unit, but
we were adding this annotation even for `static` values inside `extern` blocks
marked with `#[link(kind = "framework")]`, which should be considered dynamically
linked. Note that Clang likewise avoids emitting `dso_local` for `dllimport`
symbols:
973cb2c326/clang/lib/CodeGen/CodeGenModule.cpp (L1005-L1007)
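For illustration, here is a minimal sketch of the affected pattern, with a hypothetical symbol name; a `static` declared in a framework-linked `extern` block lives in a different linkage unit and should not be marked `dso_local`:

```rust
use std::ffi::c_void;

// Sketch of the affected pattern (the symbol name is hypothetical): a
// `static` declared inside an `extern` block linked from a macOS framework.
// The definition lives in another linkage unit, so the symbol should not
// be annotated `dso_local`.
#[link(name = "Security", kind = "framework")]
extern "C" {
    static SOME_FRAMEWORK_CONSTANT: *const c_void;
}

fn framework_constant() -> *const c_void {
    // Reading an extern static requires `unsafe`.
    unsafe { SOME_FRAMEWORK_CONSTANT }
}

fn main() {
    println!("constant address: {:p}", framework_constant());
}
```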
This issue caused breakage in the `ring` crate, which links to a symbol defined
in `Security.framework` that ultimately resolves to address `0x0`:
b94d61e044/src/rand.rs (L390)
For this symbol, the use of `dso_local` causes LLVM to emit a relocation of
type `X86_64_RELOC_SIGNED`, which is a 32-bit signed PC-relative offset. If the
binary is large enough, `0x0` might be out of range, and the link will fail.
Avoiding `dso_local` causes LLVM to use the GOT instead, emitting a relocation
of type `X86_64_RELOC_GOT_LOAD`, which will properly handle the large offset
and cause the link to succeed.
As a side note, the static relocation model is effectively deprecated for
security reasons on macOS, as it prohibits PIE. It's also completely
unsupported on Apple Silicon, so I don't think it's worth going to the effort
of properly supporting this model on that platform.
Motivation: in upcoming commits, we are going to create a graph of the
coercion relationships between variables. We want to
distinguish *coercion* specifically from other sorts of subtyping, as
it indicates values flowing from one place to another via assignment.
I didn't like the sub-unify code executing when a predicate was
ENQUEUED; that felt fragile. I would have preferred to move the
sub-unify code so that it only occurred during generalization, but
that impacted diagnostics, so having it also occur when we process
subtype predicates felt pretty reasonable. (I guess we only need one
or the other, but I kind of prefer both, since the generalizer
ultimately feels like the *right* place to guarantee the properties we
want.)
Where available, use AtomicU{64,128} instead of a mutex for Instant backsliding protection
This decreases the overhead of backsliding protection on x86 systems with an unreliable TSC (e.g. Windows), and on aarch64 systems where 128-bit atomics are available.
The following benchmarks were taken on x86_64 Linux by overriding `actually_monotonic()`; the numbers may look different on other platforms.
```
# actually_monotonic() == true
test time::tests::instant_contention_01_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_02_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_04_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_08_threads ... bench: 44 ns/iter (+/- 0)
test time::tests::instant_contention_16_threads ... bench: 44 ns/iter (+/- 0)
# 1x AtomicU64
test time::tests::instant_contention_01_threads ... bench: 65 ns/iter (+/- 0)
test time::tests::instant_contention_02_threads ... bench: 157 ns/iter (+/- 20)
test time::tests::instant_contention_04_threads ... bench: 281 ns/iter (+/- 53)
test time::tests::instant_contention_08_threads ... bench: 555 ns/iter (+/- 77)
test time::tests::instant_contention_16_threads ... bench: 883 ns/iter (+/- 107)
# mutex
test time::tests::instant_contention_01_threads ... bench: 60 ns/iter (+/- 2)
test time::tests::instant_contention_02_threads ... bench: 770 ns/iter (+/- 231)
test time::tests::instant_contention_04_threads ... bench: 1,347 ns/iter (+/- 45)
test time::tests::instant_contention_08_threads ... bench: 2,693 ns/iter (+/- 114)
test time::tests::instant_contention_16_threads ... bench: 5,244 ns/iter (+/- 487)
```
Since I don't have an ARM machine with 128-bit atomics, I wasn't able to benchmark the AtomicU128 implementation.
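For reference, a rough sketch of the 64-bit approach (not the exact std implementation; how the timestamp is packed into a `u64` is glossed over here):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Last timestamp handed out, packed into 64 bits (packing is assumed here).
static LAST_NOW: AtomicU64 = AtomicU64::new(0);

// Clamp a raw clock reading so the values we return never go backwards,
// using a single atomic instead of a mutex.
fn monotonize(raw_now: u64) -> u64 {
    // `fetch_max` atomically stores max(LAST_NOW, raw_now) and returns the
    // previous value; the larger of the two is the monotonic result.
    let prev = LAST_NOW.fetch_max(raw_now, Ordering::Relaxed);
    prev.max(raw_now)
}

fn main() {
    // Out-of-order raw readings still produce non-decreasing results.
    for raw in [10, 12, 11, 15, 14] {
        println!("raw {raw} -> monotonic {}", monotonize(raw));
    }
}
```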
I/O safety.
Introduce `OwnedFd` and `BorrowedFd`, and the `AsFd` trait, and
implementations of `AsFd`, `From<OwnedFd>` and `From<T> for OwnedFd`
for relevant types, along with Windows counterparts for handles and
sockets.
Tracking issue: <https://github.com/rust-lang/rust/issues/87074>
RFC: <https://github.com/rust-lang/rfcs/blob/master/text/3128-io-safety.md>
Highlights:
- The doc comments at the top of library/std/src/os/unix/io/mod.rs and library/std/src/os/windows/io/mod.rs
- The new types and traits in library/std/src/os/unix/io/fd.rs and library/std/src/os/windows/io/handle.rs
- The removal of the `RawHandle` struct in the Windows impl, which had the same name as the `RawHandle` type alias; its functionality is now folded into `Handle`.
Managing five levels of wrapping (`File` wraps `sys::fs::File` wraps `sys::fs::FileDesc` wraps `OwnedFd` wraps `RawFd`, etc.) made for a fair amount of churn and some verbose as/into/from sequences in places. I've managed to simplify some of them, but I'm open to ideas here.
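As a rough usage sketch of the Unix side (the helper function and the path here are illustrative, not part of the PR):

```rust
use std::fs::File;
use std::io;
use std::os::unix::io::{AsFd, BorrowedFd};

// A borrowed descriptor cannot outlive its owner and cannot be used to
// close the underlying file, which is the "I/O safety" guarantee.
fn describe(fd: BorrowedFd<'_>) {
    println!("borrowed fd: {:?}", fd);
}

fn main() -> io::Result<()> {
    let file = File::open("/etc/hosts")?;
    // `AsFd` is the safe borrowing entry point; no raw integers involved.
    describe(file.as_fd());
    Ok(())
}
```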
r? `@joshtriplett`
Fix compiling other codegen backends when llvm is enabled
Extracted from #81746
Without this change, rustbuild will not pass the linker argument required to find libllvm. While other backends likely don't use libllvm themselves, the argument is still necessary in order to link against rustc_driver, since the LLVM backend is linked into it.
Add fast path for Path::cmp that skips over long shared prefixes
```
# before
test path::tests::bench_path_cmp_fast_path_buf_sort ... bench: 60,811 ns/iter (+/- 865)
test path::tests::bench_path_cmp_fast_path_long ... bench: 6,459 ns/iter (+/- 275)
test path::tests::bench_path_cmp_fast_path_short ... bench: 1,777 ns/iter (+/- 34)
# after
test path::tests::bench_path_cmp_fast_path_buf_sort ... bench: 38,140 ns/iter (+/- 211)
test path::tests::bench_path_cmp_fast_path_long ... bench: 1,471 ns/iter (+/- 24)
test path::tests::bench_path_cmp_fast_path_short ... bench: 1,106 ns/iter (+/- 9)
```
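The general idea, sketched on plain byte slices rather than the actual `Path` component logic (illustrative only, not the std implementation):

```rust
use std::cmp::Ordering;

// Skip the bytes the two inputs share, then compare only the differing
// tails; when inputs frequently share long prefixes (as sorted paths do),
// this avoids re-examining the common part with more expensive logic.
fn cmp_skipping_shared_prefix(a: &[u8], b: &[u8]) -> Ordering {
    let shared = a.iter().zip(b.iter()).take_while(|(x, y)| x == y).count();
    a[shared..].cmp(&b[shared..])
}

fn main() {
    let a = b"/very/long/shared/prefix/file_a";
    let b = b"/very/long/shared/prefix/file_b";
    assert_eq!(cmp_skipping_shared_prefix(a, b), Ordering::Less);
    println!("ok");
}
```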
RFC2229 Only compute place if upvars can be resolved
Closes https://github.com/rust-lang/rust/issues/87987
This PR fixes an ICE when trying to unwrap an `Err`. The error appears when trying to convert a `PlaceBuilder` into a `Place` while upvars can't yet be resolved; we should only try to convert a `PlaceBuilder` into a `Place` if upvars can be resolved.
r? `@nikomatsakis`
RFC2229 Add missing edge case
Closes https://github.com/rust-lang/rust/issues/87988
This PR fixes an ICE where a match discriminant was not being read when expected. The ICE was the result of a missing edge case, which assumed that if a pattern is of kind `PatKind::TupleStruct(..) | PatKind::Path(..) | PatKind::Struct(..) | PatKind::Tuple(..)`, then a place can only be multi-variant if it is of type kind `Adt`.
We used to avoid doing this because we didn't want to make coercion depend on
the state of inference. For better or worse, we have moved away from this
position over time. Therefore, I am going to go ahead and resolve the `b`
target type early on so that it is done uniformly.
(The older technique for managing this was always something of a hack
regardless; if we really wanted to avoid integrating coercion and inference we
needed to be more disciplined about it.)
The name (and updated documentation) makes the FFI-only usage clearer, and wrapping `Option<OwnedHandle>` avoids the need to write a separate Drop or Debug impl.
Co-authored-by: Josh Triplett <josh@joshtriplett.org>