Remove ordering traits from `OutlivesConstraint`
In two cases where this ordering was used, I've replaced the sorting with a key that does not rely on `DefId` being `Ord`. This is part of #90317. If I understand correctly, whether this is correct depends on whether the `RegionVid`s are tracked during incremental compilation, but I might be mistaken in this approach. cc `@cjgillot`
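A minimal sketch (not the actual compiler code) of the kind of change: with `sort_by_key`, only the key needs `Ord`, so the element type can drop its ordering traits. The field names below are simplified stand-ins for the real `OutlivesConstraint` fields.
```rust
// `OutlivesConstraint` no longer derives `Ord`/`PartialOrd`; sorting goes
// through a key built from the region variables it mentions.
#[derive(Debug)]
struct OutlivesConstraint {
    sup: u32, // stand-in for a `RegionVid`
    sub: u32,
}

fn main() {
    let mut constraints = vec![
        OutlivesConstraint { sup: 3, sub: 1 },
        OutlivesConstraint { sup: 1, sub: 2 },
    ];
    // `sort_by_key` only requires `Ord` on the returned key.
    constraints.sort_by_key(|c| (c.sup, c.sub));
    println!("{:?}", constraints);
}
```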
Use error-on-mismatch policy for PAuth module flags.
This agrees with Clang, and avoids an error when using LTO with mixed
C/Rust. LLVM considers different behaviour flags to be a mismatch,
even when the flag value itself is the same.
This also makes the flag setting explicit for all uses of
LLVMRustAddModuleFlag.
----
I believe that this fixes #92885, but I have only reproduced it locally on Linux hosts, so I cannot confirm that it fixes the issue as reported.
I have not included a test for this because it is covered by an existing test (`src/test/run-make-fulldeps/cross-lang-lto-clang`). It is not without its problems, though:
* The test requires Clang and `--run-clang-based-tests-with=...` to run, and this is not the case on the CI.
* Any test I add would have a similar requirement.
* With this patch applied, the test gets further, but it still fails (for other reasons). I don't think that affects #92885.
Work around missing code coverage data causing llvm-cov failures
If we do not add code coverage instrumentation to the `Body` of a
function, then when we go to generate the function record for it, we
won't write any data and this later causes llvm-cov to fail when
processing data for the entire coverage report.
I've identified two main cases where we do not currently add code
coverage instrumentation to the `Body` of a function:
1. If the function has a single `BasicBlock` and it ends with a
`TerminatorKind::Unreachable`.
2. If the function is created using a proc macro of some kind.
For case 1, this is typically not important, as it most often occurs as
a result of function definitions that take or return uninhabited
types. These kinds of functions, by definition, cannot even be called,
so they logically should not be counted in code coverage statistics.
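For example (my own illustration, not taken from the PR), a function whose argument type is uninhabited consists of essentially nothing but an unreachable block:
```rust
enum Never {} // an uninhabited type: no value of it can ever be constructed

// The body of this function is (roughly) a single `BasicBlock` ending in
// `TerminatorKind::Unreachable`, and the function can never actually be
// called, since no `Never` value exists to pass to it.
fn consume(x: Never) -> Never {
    match x {} // exhaustive match on an uninhabited type
}

fn main() {}
```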
For case 2, I haven't looked into this very much, but while testing
this patch I've noticed that (other than functions covered by case 1)
the skipped-function coverage debug message is occasionally triggered
in large crate graphs by functions generated from a proc macro. This
may have something to do with weird spans being generated by the proc
macro, but this is just a guess.
I think it's reasonable to land this change since currently, we fail to
generate *any* results from llvm-cov when a function has no coverage
instrumentation applied to it. With this change, we get coverage data
for all functions other than the two cases discussed above.
Fixes #93054, which occurs because of uncallable functions that shouldn't
have code coverage anyway.
I will open an issue for missing code coverage of proc macro generated
functions and leave a link here once I have a more minimal repro.
r? ``@tmandry``
cc ``@richkadel``
Move param count error emission to end of `check_argument_types`
The error emission here isn't exactly what is done in #92364, but replicating that is hard. The general move should make for a smaller diff.
Also included the `(usize, Ty, Ty)` -> `Option<(Ty, Ty)>` commit.
r? ``@estebank``
Properly track `DepNode`s in trait evaluation provisional cache
Fixes #92987
During evaluation of an auto trait predicate, we may encounter a cycle.
This causes us to store the evaluation result in a special 'provisional
cache'. If we later end up determining that the type can legitimately
implement the auto trait despite the cycle, we remove the entry from
the provisional cache, and insert it into the evaluation cache.
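As an illustration (mine, not from the issue), evaluating an auto trait for a self-referential type produces exactly this kind of cycle:
```rust
// Checking `List: Send` requires `Option<Box<List>>: Send`, which leads
// back to `List: Send`; the cyclic result is first recorded in the
// provisional cache and, once the cycle resolves, promoted to the
// evaluation cache.
struct List {
    next: Option<Box<List>>,
}

fn assert_send<T: Send>() {}

fn main() {
    // Despite the cycle, `List` does implement `Send`.
    assert_send::<List>();
}
```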
Additionally, trait evaluation creates a special anonymous `DepNode`.
All queries invoked during the predicate evaluation are added as
outgoing dependency edges from the `DepNode`. This `DepNode` is then
stored in the evaluation cache - if a different query ends up reading
from the cache entry, it will also perform a read of the stored
`DepNode`. As a result, the cached evaluation will still end up
(transitively) incurring all of the same dependencies that it would
if it actually performed the uncached evaluation (e.g. a call to
`type_of` to determine constituent types).
Previously, we did not correctly handle the interaction between the
provisional cache and the created `DepNode`. Storing an evaluation
result in the provisional cache would cause us to lose the `DepNode`
created during the evaluation. If we later moved the entry from the
provisional cache to the evaluation cache, we would use the `DepNode`
associated with the evaluation that caused us to 'complete' the cycle,
not the evaluation where we first discovered the cycle. As a result,
future reads from the evaluation cache would miss some incremental
compilation dependencies that would have otherwise been added if the
evaluation was *not* cached.
Under the right circumstances, this could lead to us trying to force
a query with a no-longer-existing `DefPathHash`, since we were missing
the (red) dependency edge that would have caused us to bail out before
attempting to force it.
This commit makes the provisional cache store the `DepNode` created
during the provisional evaluation. When we move an entry from the
provisional cache to the evaluation cache, we create a *new* `DepNode`
that has dependencies going to *both* of the evaluation `DepNodes` we
have available. This ensures that cached reads will incur all of
the necessary dependency edges.
Revert "Do not hash leading zero bytes of i64 numbers in Sip128 hasher"
Reverts rust-lang/rust#92103. It had a (in retrospect, obvious) correctness problem where changing the order of two adjacent values would produce identical hashes, which is problematic in stable hashing (see [this comment](https://github.com/rust-lang/rust/pull/92103#issuecomment-1014625442)).
I'll try to send the PR again with a fix for this issue.
r? `@the8472`
Check `const Drop` impls considering `~const` Bounds
This PR adds logic to trait selection to account for `~const` bounds in custom `impl const Drop` for types, elaborates the `const Drop` check in `rustc_const_eval` to check those bounds, and steals some drop linting fixes from #92922, thanks `@DrMeepster`.
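A rough sketch of the kind of impl this covers, using the nightly syntax from around the time of this PR (the exact feature gates and the `~const Drop` bound syntax have since changed, so treat this as illustrative only):
```rust
#![feature(const_trait_impl)]

struct Wrapper<T>(T);

// A custom `impl const Drop` whose `~const Drop` bound on `T` must now be
// taken into account by trait selection and by the `const Drop` check in
// `rustc_const_eval`.
impl<T: ~const Drop> const Drop for Wrapper<T> {
    fn drop(&mut self) {}
}

fn main() {}
```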
r? `@fee1-dead` `@oli-obk` <sup>(edit: guess I can't request review from two people, lol)</sup>
since each of you wrote and reviewed #88558, respectively.
Since the logic here is more complicated than what existed, it's possible that this is a perf regression. But it works correctly with tests, and that makes me happy.
Fixes #92881
rustc_mir_itertools: Avoid needless `collect` with itertools
I don't think this should have measurable perf impact (at least not on perf.rlo benchmarks), it's mostly for readability.
Liberate late bound regions when collecting GAT substs in wfcheck
The issue here is that the [`GATSubstCollector`](https://github.com/rust-lang/rust/blob/master/compiler/rustc_typeck/src/check/wfcheck.rs#L604) does not currently do anything wrt `Binder`s, so the GAT substs it copies out have escaping late bound regions when it walks through types like `for<'x> fn() -> Self::Gat<'x>`.
I made that visitor call `liberate_late_bound_regions`; I'm not sure if that's the right thing here or whether we need to do something else to replace these bound vars with placeholders, and I'm not familiar with other code doing anything similar. But the issue is indeed no longer ICEing.
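A rough reduction of the shape involved (my own sketch, not the code from the issue; the GAT is placed in argument position here so the higher-ranked fn pointer type stays well-formed, and GATs were still feature-gated at the time):
```rust
#![feature(generic_associated_types)]

trait Trait {
    type Gat<'a>;

    // `Self::Gat<'x>` sits under the `for<'x>` binder, so walking this
    // signature yields GAT substs containing a late-bound region that has
    // to be liberated (or otherwise replaced) before wfcheck can use it.
    fn f(&self) -> for<'x> fn(Self::Gat<'x>);
}

fn main() {}
```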
Fixes #92954
r? `@jackh726`
since you last touched this code, feel free to reassign
Normalize field access types during borrowck
I think a normalize was just left out here, since we normalize analogously throughout this file.
Fixes #93141
Add preliminary support for inline assembly for msp430.
The `llvm_asm` macro was removed recently, and the MSP430 backend relies on inline assembly to build useful embedded apps. I conveniently "found" time to implement basic support for the new inline `asm` macro syntax with the help of `@Amanieu` :D.
In addition to tests in the compiler, I have tested this locally against deployed MSP430 code and have not found any noticeable differences in firmware operation or `objdump` disassemblies between the old `llvm_asm` and the new `asm` syntax.
rustc_lint: Some early linting refactorings
The first one removes and renames some fields and methods from `EarlyContext`.
The second one uses the set of registered tools (for tool attributes and tool lints) in a more centralized way.
The third one removes creation of a fake `ast::Crate` from `fn pre_expansion_lint`.
Pre-expansion linting is done with per-module granularity on freshly loaded modules; it previously synthesized an `ast::Crate` to visit non-root modules, but now they are visited as modules.
The node ID used for pre-expansion linting is also made more precise (the loaded module ID is used).
Make `Decodable` and `Decoder` infallible.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
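A hedged sketch of the shape of the change (trait and method names are simplified stand-ins, not the real `rustc_serialize` API):
```rust
// Before: every primitive read is fallible, so all `Decodable` impls have
// to thread a `Result` through, even though callers just abort on error.
trait FallibleDecoder {
    type Error;
    fn read_u32(&mut self) -> Result<u32, Self::Error>;
}

// After: reads are infallible from the caller's point of view; the opaque
// impl panics on malformed input instead of returning an error.
trait InfallibleDecoder {
    fn read_u32(&mut self) -> u32;
}

fn main() {}
```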
Much of this PR is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
r? `@bjorn3`
Use consistent function parameter order for early context construction and early linting
Rename some functions to make it clear that they do not necessarily work on the whole crate
Disable drop range tracking in generators
Generator drop tracking caused an ICE for generators involving the Never type (Issue #93161). Since this breaks a test case with miri, we temporarily disable drop tracking so miri is unblocked while we properly fix the issue.
Reject unsupported naked functions
Transition unsupported naked functions future incompatibility lint into an error:
* Naked functions must contain a single inline assembly block (see the sketch below). Introduced as a future incompatibility lint in 1.50 (#79653). Changing this into an error fixes a soundness issue described in #32489.
* Naked functions must not use any form of inline attribute. Introduced as a future incompatibility lint in 1.56 (#87652).
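A minimal sketch of a conforming naked function (assuming an x86_64 target and the nightly `naked_functions` feature and `asm!` syntax of this era; not taken from the PR):
```rust
#![feature(naked_functions)]
use core::arch::asm;

#[naked]
extern "C" fn return_zero() -> u32 {
    // The whole body must be exactly one `asm!` block; any other statements,
    // or an `#[inline]` attribute on the function, are now rejected.
    unsafe {
        asm!("xor eax, eax", "ret", options(noreturn));
    }
}

fn main() {
    assert_eq!(return_zero(), 0);
}
```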
Closes #32490.
Closes #32489.
r? ```@Amanieu``` ```@npmccallum``` ```@joshtriplett```
Print a helpful message if unwinding aborts when it reaches a nounwind function
This is implemented by routing `TerminatorKind::Abort` back through the panic handler, but with a special flag in the `PanicInfo` which indicates that the panic handler should *not* attempt to unwind the stack and should instead abort immediately.
This is useful for the planned change in https://github.com/rust-lang/lang-team/issues/97 which would make `Drop` impls `nounwind` by default.
### Code
```rust
#![feature(c_unwind)]
fn panic() {
    panic!()
}

extern "C" fn nounwind() {
    panic();
}

fn main() {
    nounwind();
}
```
### Before
```
$ ./test
thread 'main' panicked at 'explicit panic', test.rs:4:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Illegal instruction (core dumped)
```
### After
```
$ ./test
thread 'main' panicked at 'explicit panic', test.rs:4:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'main' panicked at 'panic in a function that cannot unwind', test.rs:7:1
stack backtrace:
0: 0x556f8f86ec9b - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hdccefe11a6ac4396
1: 0x556f8f88ac6c - core::fmt::write::he152b28c41466ebb
2: 0x556f8f85d6e2 - std::io::Write::write_fmt::h0c261480ab86f3d3
3: 0x556f8f8654fa - std::panicking::default_hook::{{closure}}::h5d7346f3ff7f6c1b
4: 0x556f8f86512b - std::panicking::default_hook::hd85803a1376cac7f
5: 0x556f8f865a91 - std::panicking::rust_panic_with_hook::h4dc1c5a3036257ac
6: 0x556f8f86f079 - std::panicking::begin_panic_handler::{{closure}}::hdda1d83c7a9d34d2
7: 0x556f8f86edc4 - std::sys_common::backtrace::__rust_end_short_backtrace::h5b70ed0cce71e95f
8: 0x556f8f865592 - rust_begin_unwind
9: 0x556f8f85a764 - core::panicking::panic_no_unwind::h2606ab3d78c87899
10: 0x556f8f85b910 - test::nounwind::hade6c7ee65050347
11: 0x556f8f85b936 - test::main::hdc6e02cb36343525
12: 0x556f8f85b7e3 - core::ops::function::FnOnce::call_once::h4d02663acfc7597f
13: 0x556f8f85b739 - std::sys_common::backtrace::__rust_begin_short_backtrace::h071d40135adb0101
14: 0x556f8f85c149 - std::rt::lang_start::{{closure}}::h70dbfbf38b685e93
15: 0x556f8f85c791 - std::rt::lang_start_internal::h798f1c0268d525aa
16: 0x556f8f85c131 - std::rt::lang_start::h476a7ee0a0bb663f
17: 0x556f8f85b963 - main
18: 0x7f64c0822b25 - __libc_start_main
19: 0x556f8f85ae8e - _start
20: 0x0 - <unknown>
thread panicked while panicking. aborting.
Aborted (core dumped)
```
add support for the l4-bender linker on the x86_64-unknown-l4re-uclibc tier 3 target
This PR contains the work by ```@humenda``` to update support for the `x86_64-unknown-l4re-uclibc` tier 3 target (published at [humenda/rust](https://github.com/humenda/rust)), rebased and adapted to current rust in follow up commits by myself. The publishing of the rebased changes is authorized and preferred by the original author. As the goal was to distort the original work as little as possible, individual commits introduce changes that are incompatible to the newer code base that the changes were rebased on. These incompatibilities have been remedied in follow up commits, so that the PR as a whole should result in a clean update of the target.
If you prefer another strategy to mainline these changes while preserving attribution, please let me know.