Commit graph

16864 commits

Ralf Jung
a87f5ca917 sys/unix: add comments for some Miri fallbacks 2024-10-13 12:35:06 +02:00
Ralf Jung
8d0a0b000c remove outdated comment now that Miri is on CI 2024-10-13 12:30:23 +02:00
Ralf Jung
2ae3b1b09a sys/windows: remove miri hack that is only needed for win7 2024-10-13 12:30:23 +02:00
Ralf Jung
90e4f10f6c switch unicode-data back to 'static' 2024-10-13 11:53:06 +02:00
Ralf Jung
1ebfd97051 merge const_ipv4 / const_ipv6 feature gate into 'ip' feature gate 2024-10-13 09:55:34 +02:00
Trevor Gross
415e61c209
Rollup merge of #131418 - coolreader18:wasm-exc-use-stdarch, r=bjorn3
Use throw intrinsic from stdarch in wasm libunwind

Tracking issue: #118168

This is a very belated followup to #121438; now that rust-lang/stdarch#1542 is merged, we can use the intrinsic exported from `core::arch` instead of defining it inline. I also cleaned up the cfgs a bit and added a more detailed comment.
2024-10-12 21:38:36 -05:00
Trevor Gross
c8b2f7e458
Rollup merge of #131120 - tgross35:stabilize-const_option, r=RalfJung
Stabilize `const_option`

This makes the following API stable in const contexts:

```rust
impl<T> Option<T> {
    pub const fn as_mut(&mut self) -> Option<&mut T>;
    pub const fn expect(self, msg: &str) -> T;
    pub const fn unwrap(self) -> T;
    pub const unsafe fn unwrap_unchecked(self) -> T;
    pub const fn take(&mut self) -> Option<T>;
    pub const fn replace(&mut self, value: T) -> Option<T>;
}

impl<T> Option<&T> {
    pub const fn copied(self) -> Option<T>
    where T: Copy;
}

impl<T> Option<&mut T> {
    pub const fn copied(self) -> Option<T>
    where T: Copy;
}

impl<T, E> Option<Result<T, E>> {
    pub const fn transpose(self) -> Result<Option<T>, E>
}

impl<T> Option<Option<T>> {
    pub const fn flatten(self) -> Option<T>;
}
```

The following functions make use of the unstable `const_precise_live_drops` feature:

- `expect`
- `unwrap`
- `unwrap_unchecked`
- `transpose`
- `flatten`

Fixes: <https://github.com/rust-lang/rust/issues/67441>
2024-10-12 21:38:35 -05:00
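
For illustration, a minimal sketch of the kind of const usage this stabilization permits (assuming a toolchain where `const_option` has landed as stable):

```rust
// Sketch: Option::unwrap in a const initializer, per the list above.
const DEFAULT_PORT: Option<u16> = Some(8080);
const PORT: u16 = DEFAULT_PORT.unwrap();

fn main() {
    assert_eq!(PORT, 8080);
}
```
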
beetrees
feecfaa18d
Fix bug where option_env! would return None when env var is present but not valid Unicode 2024-10-13 02:10:19 +01:00
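
For context, `option_env!` is evaluated at compile time and expands to an `Option<&'static str>`; a small usage sketch (the variable name here is only an example):

```rust
fn main() {
    // Read at compile time; yields None if the variable is unset.
    const PKG_NAME: Option<&str> = option_env!("CARGO_PKG_NAME");
    println!("{:?}", PKG_NAME);
}
```
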
Andreas Molzer
c128b4c433 Fix typo thing->thin referring to pointer 2024-10-13 02:35:09 +02:00
Trevor Gross
19f6c17df4 Stabilize const_option
This makes the following API stable in const contexts:

    impl<T> Option<T> {
        pub const fn as_mut(&mut self) -> Option<&mut T>;
        pub const fn expect(self, msg: &str) -> T;
        pub const fn unwrap(self) -> T;
        pub const unsafe fn unwrap_unchecked(self) -> T;
        pub const fn take(&mut self) -> Option<T>;
        pub const fn replace(&mut self, value: T) -> Option<T>;
    }

    impl<T> Option<&T> {
        pub const fn copied(self) -> Option<T>
        where T: Copy;
    }

    impl<T> Option<&mut T> {
        pub const fn copied(self) -> Option<T>
        where T: Copy;
    }

    impl<T, E> Option<Result<T, E>> {
        pub const fn transpose(self) -> Result<Option<T>, E>
    }

    impl<T> Option<Option<T>> {
        pub const fn flatten(self) -> Option<T>;
    }

The following functions make use of the unstable
`const_precise_live_drops` feature:

- `expect`
- `unwrap`
- `unwrap_unchecked`
- `transpose`
- `flatten`

Fixes: <https://github.com/rust-lang/rust/issues/67441>
2024-10-12 17:07:13 -04:00
Matthias Krüger
de72917050
Rollup merge of #131617 - RalfJung:const_cow_is_borrowed, r=tgross35
remove const_cow_is_borrowed feature gate

The two functions guarded by this are still unstable, and there's no reason to require a separate feature gate for their const-ness -- we can just have `cow_is_borrowed` cover both kinds of stability.

Cc #65143
2024-10-12 23:00:59 +02:00
Matthias Krüger
c0f16828a1
Rollup merge of #131503 - theemathas:stdin_read_line_docs, r=Mark-Simulacrum
More clearly document Stdin::read_line

These are common pitfalls for beginners, so I think it's worth making the subtleties more visible.
2024-10-12 23:00:57 +02:00
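
A small sketch of the commonly cited pitfalls: `read_line` appends to the buffer rather than overwriting it, and the returned text keeps the trailing newline.

```rust
use std::io;

fn main() -> io::Result<()> {
    let mut line = String::new();
    // Appends to `line`; reuse of the buffer without clearing it accumulates input.
    io::stdin().read_line(&mut line)?;
    // The trailing newline is still present, so trim it before comparing.
    let trimmed = line.trim_end();
    println!("you typed: {trimmed:?}");
    Ok(())
}
```
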
Ralf Jung
a0661ec331 remove const_cow_is_borrowed feature gate 2024-10-12 19:48:28 +02:00
Trevor Gross
ca3c822068
Rollup merge of #131233 - joboet:stdout-before-main, r=tgross35
std: fix stdout-before-main

Fixes #130210.

Since #124881, `ReentrantLock` uses `ThreadId` to identify threads. This has the unfortunate consequence of breaking uses of `Stdout` before main: Locking the `ReentrantLock` that synchronizes the output will initialize the thread ID before the handle for the main thread is set in `rt::init`. But since that would overwrite the current thread ID, `thread::set_current` triggers an abort.

This PR fixes the problem by using the already initialized thread ID for constructing the main thread handle and allowing `set_current` calls that do not change the thread's ID.
2024-10-12 11:08:43 -05:00
Trevor Gross
8a86f1dd8c
Rollup merge of #130954 - workingjubilee:stabilize-const-mut-fn, r=RalfJung
Stabilize const `ptr::write*` and `mem::replace`

Since `const_mut_refs` and `const_refs_to_cell` have been stabilized, we may now also stabilize the ability to write to places during const evaluation inside our library API. So, we now propose the `const fn` version of `ptr::write` and its variants. This allows us to also stabilize `mem::replace` and `ptr::replace`.
- const `mem::replace`: https://github.com/rust-lang/rust/issues/83164#issuecomment-2338660862
- const `ptr::write{,_bytes,_unaligned}`: https://github.com/rust-lang/rust/issues/86302#issuecomment-2330275266

Their implementation requires an additional internal stabilization of `const_intrinsic_forget`, which is required for `*::write*` and thus `*::replace`. Thus we const-stabilize the internal intrinsics `forget`, `write_bytes`, and `write_via_move`.
2024-10-12 11:08:42 -05:00
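
A hedged sketch of the const code this enables, assuming the functions are const-stable as described above:

```rust
// Sketch: mutate through a reference in a const initializer and keep the
// old value via mem::replace.
const SWAPPED: (i32, i32) = {
    let mut x = 1;
    let old = core::mem::replace(&mut x, 2);
    (old, x)
};

fn main() {
    assert_eq!(SWAPPED, (1, 2));
}
```
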
joboet
9f91c5099f
std: fix stdout-before-main
Fixes #130210.

Since #124881, `ReentrantLock` uses `ThreadId` to identify threads. This has the unfortunate consequence of breaking uses of `Stdout` before main: Locking the `ReentrantLock` that synchronizes the output will initialize the thread ID before the handle for the main thread is set in `rt::init`. But since that would overwrite the current thread ID, `thread::set_current` triggers an abort.

This PR fixes the problem by using the already initialized thread ID for constructing the main thread handle and allowing `set_current` calls that do not change the thread's ID.
2024-10-12 13:01:36 +02:00
Jubilee Young
187c8b0ce9 library: Stabilize const_replace
Depends on stabilizing `const_ptr_write`.

Const-stabilizes:
- `core::mem::replace`
- `core::ptr::replace`
2024-10-12 00:02:38 -07:00
Jubilee Young
ddc367ded7 library: Stabilize const_ptr_write
Const-stabilizes:
- `write`
- `write_bytes`
- `write_unaligned`

In the following paths:
- `core::ptr`
- `core::ptr::NonNull`
- pointer `<*mut T>`

Const-stabilizes the internal `core::intrinsics`:
- `write_bytes`
- `write_via_move`
2024-10-12 00:02:36 -07:00
Jubilee Young
9a523001e3 library: Stabilize const_intrinsic_forget
This is an implicit requirement of stabilizing `const_ptr_write`.

Const-stabilizes the internal `core::intrinsics`:
- `forget`
2024-10-12 00:02:09 -07:00
Trevor Gross
3e16b77465
Rollup merge of #131289 - RalfJung:duration_consts_float, r=tgross35
stabilize duration_consts_float

Waiting for FCP in https://github.com/rust-lang/rust/issues/72440 to pass.

`as_millis_f32` and `as_millis_f64` are not stable at all yet, so I moved their const-stability together with their regular stability (tracked at https://github.com/rust-lang/rust/issues/122451).

Fixes https://github.com/rust-lang/rust/issues/72440
2024-10-11 23:57:45 -04:00
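
A minimal sketch of what the stabilized const float accessors allow (assuming `as_secs_f64` is among them, per the tracking issue):

```rust
use std::time::Duration;

const HALF_SECOND: Duration = Duration::from_millis(500);
// Float accessors on Duration usable in a const context.
const HALF_SECOND_SECS: f64 = HALF_SECOND.as_secs_f64();

fn main() {
    assert_eq!(HALF_SECOND_SECS, 0.5);
}
```
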
Trevor Gross
02cf62c596
Rollup merge of #130962 - nyurik:opts-libs, r=cuviper
Migrate lib's `&Option<T>` into `Option<&T>`

Trying out my new lint https://github.com/rust-lang/rust-clippy/pull/13336 - according to the [video](https://www.youtube.com/watch?v=6c7pZYP_iIE), this could lead to some performance and memory optimizations.

Basic thoughts expressed in the video that seem to make sense:
* `&Option<T>` in an API breaks encapsulation:
  * a caller that has a `T` must move it into an `Option` just to make the call
  * if returned, the owner must store the value as `Option<T>` internally in order to return it
* Performance is subject to compiler optimization, but at a basic level `&Option<T>` points to memory holding a presence flag plus the value, whereas `Option<&T>` is guaranteed to be a single (possibly-null) pointer thanks to the niche optimization.
2024-10-11 23:57:44 -04:00
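
An illustrative before/after of the signature migration described above (the function names are made up for the example):

```rust
// Before: forces the caller to have (or build) an owned Option<String>.
fn old_api(name: &Option<String>) -> usize {
    name.as_ref().map_or(0, |s| s.len())
}

// After: the caller can pass Option<&str> derived from whatever it owns.
fn new_api(name: Option<&str>) -> usize {
    name.map_or(0, |s| s.len())
}

fn main() {
    let name = Some(String::from("ferris"));
    assert_eq!(old_api(&name), new_api(name.as_deref()));
}
```
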
Trevor Gross
3f9aa50b70
Rollup merge of #124874 - jedbrown:float-mul-add-fast, r=saethlin
intrinsics fmuladdf{32,64}: expose llvm.fmuladd.* semantics

Add intrinsics `fmuladd{f32,f64}`. This computes `(a * b) + c`, to be fused if the code generator determines that (i) the target instruction set has support for a fused operation, and (ii) that the fused operation is more efficient than the equivalent, separate pair of `mul` and `add` instructions.

https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic

The codegen_cranelift backend uses the `fma` function from libc, which is a correct implementation but lacks the desired performance semantics. I think this requires an update to Cranelift to expose a suitable instruction in its IR.

I have not tested with codegen_gcc, but it should behave the same way (using `fma` from libc).

---
This topic has been discussed a few times on Zulip and was suggested, for example, by `@workingjubilee` in [Effect of fma disabled](https://rust-lang.zulipchat.com/#narrow/stream/122651-general/topic/Effect.20of.20fma.20disabled/near/274179331).
2024-10-11 23:57:44 -04:00
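
The new intrinsics are compiler-internal; the closest stable relative is `mul_add`, which always requests a fused operation (and therefore different rounding), whereas `llvm.fmuladd` lets the backend fuse only when it is available and profitable. A small sketch of the rounding difference fusion can make:

```rust
fn main() {
    let (a, b, c) = (0.1_f64, 10.0, -1.0);
    // Separate multiply and add round the intermediate product.
    println!("separate: {:e}", a * b + c);
    // mul_add computes a*b + c with a single rounding (fused).
    println!("fused:    {:e}", a.mul_add(b, c));
}
```
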
Josh Stone
5365b3f7be Avoid superfluous UB checks in IndexRange
`IndexRange::len` is justified as an overall invariant, and
`take_prefix` and `take_suffix` are justified by local branch
conditions. A few more UB-checked calls remain in cases that are only
supported locally by `debug_assert!`, which won't do anything in
distributed builds, so those UB checks may still be useful.

We generally expect core's `#![rustc_preserve_ub_checks]` to optimize
away in users' release builds, but the mere presence of that extra code
can sometimes inhibit optimization, as seen in #131563.
2024-10-11 16:22:43 -07:00
Trevor Gross
8ea41b903f
Rollup merge of #131463 - bjoernager:const-char-encode-utf8, r=RalfJung
Stabilise `const_char_encode_utf8`.

Closes: #130512

This PR stabilises the `const_char_encode_utf8` feature gate (i.e. support for `char::encode_utf8` in const scenarios).

Note that the linked tracking issue is currently awaiting FCP.
2024-10-11 16:53:49 -05:00
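
A minimal sketch of the now-const-stable API (assumes a toolchain where this and `const_mut_refs` have landed):

```rust
// Encode a char into a stack buffer entirely at compile time.
const fn utf8_len_of(c: char) -> usize {
    let mut buf = [0u8; 4];
    c.encode_utf8(&mut buf).len()
}

const SNOWMAN_LEN: usize = utf8_len_of('☃');

fn main() {
    assert_eq!(SNOWMAN_LEN, 3);
}
```
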
Trevor Gross
622fc5e0f3
Rollup merge of #131287 - RalfJung:const_result, r=tgross35
stabilize const_result

Waiting for FCP to complete in https://github.com/rust-lang/rust/issues/82814

Fixes #82814
2024-10-11 16:53:48 -05:00
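
A hedged sketch of the const usage this unlocks, assuming `unwrap` is among the newly const-stable `Result` methods:

```rust
const ANSWER: i32 = {
    let res: Result<i32, ()> = Ok(42);
    res.unwrap()
};

fn main() {
    assert_eq!(ANSWER, 42);
}
```
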
Trevor Gross
8797cfed68
Rollup merge of #131109 - tgross35:stabilize-debug_more_non_exhaustive, r=joboet
Stabilize `debug_more_non_exhaustive`

Fixes: https://github.com/rust-lang/rust/issues/127942
2024-10-11 16:53:47 -05:00
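
As far as the tracking issue describes, this adds `finish_non_exhaustive` to the remaining `Debug*` builders (list, set, map, tuple); an illustrative sketch:

```rust
use std::fmt;

struct Digits(Vec<u32>);

impl fmt::Debug for Digits {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // finish_non_exhaustive() prints a trailing `..` for omitted entries.
        f.debug_list()
            .entries(self.0.iter().take(3))
            .finish_non_exhaustive()
    }
}

fn main() {
    println!("{:?}", Digits(vec![1, 2, 3, 4, 5])); // e.g. [1, 2, 3, ..]
}
```
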
Trevor Gross
f241d0a230
Rollup merge of #131065 - Voultapher:port-sort-test-suite, r=thomcc
Port sort-research-rs test suite to Rust stdlib tests

This PR is a followup to https://github.com/rust-lang/rust/pull/124032. It replaces the tests covering the various sort functions in the standard library with a test-suite developed as part of https://github.com/Voultapher/sort-research-rs. The current tests suffer from a couple of problems:

- They don't cover important real-world patterns that the implementations take advantage of and execute special code for.
- The input lengths tested miss out on code paths. For example, important safety property tests never reach the quicksort part of the implementation.
- The miri side is often limited to `len <= 20`, which means it very thoroughly tests the insertion sort, which accounts for 19 out of 1.5k LoC.
- They are split between core and alloc, causing code duplication and uneven coverage.
- ~~The randomness is tied to a caller location, wasting the space exploration capabilities of randomized testing.~~ The randomness is not repeatable, as it relies on `std::hash::RandomState::new().build_hasher()`.

Most of these issues existed before https://github.com/rust-lang/rust/pull/124032, but they are intensified by it. One thing that is new and requires additional testing is that the new sort implementations specialize based on type properties. For example, `Freeze` and non-`Freeze` types execute different code paths.

Effectively there are three dimensions that matter:

- Input type
- Input length
- Input pattern

The ported test-suite tests various properties along all three dimensions, greatly improving test coverage. It side-steps the miri issue by preferring sampled approaches. For example, the test that checks whether the set of elements is still the original one after a panic doesn't do so for every single possible panic opportunity; rather, it picks one at random and performs this test across a range of input lengths, which varies the panic point across them. This allows regular execution to easily test inputs of length 10k, and miri execution up to 100, which covers significantly more code. The randomness used is tied to a fixed - but random per process execution - seed. This allows for fully repeatable tests and fuzzer-like exploration across multiple runs.

Structure-wise, the tests were previously found in the core integration tests for `sort_unstable` and the alloc unit tests for `sort`. The new test-suite was developed to be a purely black-box approach, which makes integration testing the better place, because it can't accidentally rely on internal access. Because unwinding support is required, the tests can't be in core, even if the implementation is, so they are now part of the alloc integration tests. Are there architectures that can only build and test core and not alloc? If so, do such platforms require sort testing? For what it's worth, the current implementation state passes miri `--target mips64-unknown-linux-gnuabi64`, which is big-endian.

The test-suite also contains tests for properties that were and are given by the current and previous implementations, and likely relied upon by users but weren't tested. For example `self_cmp` tests that the two parameters `a` and `b` passed into the comparison function are never references to the same object, which if the user is sorting for example a `&mut [Mutex<i32>]` could lead to a deadlock.

Instead of using the hashed caller location as the rand seed, it uses seconds since the unix epoch divided by 10, which, given timestamps in the CI, should be reasonably easy to reproduce, but also allows fuzzer-like space exploration.

---

Test run-time changes:

Setup:

```
Linux 6.10
rustc 1.83.0-nightly (f79a912d9 2024-09-18)
AMD Ryzen 9 5900X 12-Core Processor (Zen 3 micro-architecture)
CPU boost enabled.
```

master: e9df22f

Before core integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/coretests-219cbd0308a49e2f
  Time (mean ± σ):     869.6 ms ±  21.1 ms    [User: 1327.6 ms, System: 95.1 ms]
  Range (min … max):   845.4 ms … 917.0 ms    10 runs

# MIRIFLAGS="-Zmiri-disable-isolation" to get real time
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/core
  finished in 738.44s
```

After core integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/coretests-219cbd0308a49e2f
  Time (mean ± σ):     865.1 ms ±  14.7 ms    [User: 1283.5 ms, System: 88.4 ms]
  Range (min … max):   836.2 ms … 885.7 ms    10 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/core
  finished in 752.35s
```

Before alloc unit tests:

```
LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloc-19c15e6e8565aa54
  Time (mean ± σ):     295.0 ms ±   9.9 ms    [User: 719.6 ms, System: 35.3 ms]
  Range (min … max):   284.9 ms … 319.3 ms    10 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
  finished in 322.75s
```

After alloc unit tests:

```
LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloc-19c15e6e8565aa54
  Time (mean ± σ):      97.4 ms ±   4.1 ms    [User: 297.7 ms, System: 28.6 ms]
  Range (min … max):    92.3 ms … 109.2 ms    27 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
  finished in 309.18s
```

Before alloc integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloctests-439e7300c61a8046
  Time (mean ± σ):     103.2 ms ±   1.7 ms    [User: 135.7 ms, System: 39.4 ms]
  Range (min … max):    99.7 ms … 107.3 ms    28 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
  finished in 231.35s
```

After alloc integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloctests-439e7300c61a8046
  Time (mean ± σ):     379.8 ms ±   4.7 ms    [User: 4620.5 ms, System: 1157.2 ms]
  Range (min … max):   373.6 ms … 386.9 ms    10 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
  finished in 449.24s
```

In my opinion the results don't change iterative library development or CI execution in meaningful ways. For example currently the library doc-tests take ~66s and incremental compilation takes 10+ seconds. However I only have limited knowledge of the various local development workflows that exist, and might be missing one that is significantly impacted by this change.
2024-10-11 16:53:47 -05:00
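
A hypothetical helper mirroring the seeding scheme described above (one seed per ten-second window, so CI runs are easy to reproduce while still varying over time):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical: derive a repeatable-but-varying seed for randomized tests.
fn test_seed() -> u64 {
    let secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the unix epoch")
        .as_secs();
    secs / 10
}

fn main() {
    println!("seed for this run: {}", test_seed());
}
```
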
Jed Brown
0d8a978e8a intrinsics.fmuladdf{16,32,64,128}: expose llvm.fmuladd.* semantics
Add intrinsics `fmuladd{f16,f32,f64,f128}`. This computes `(a * b) +
c`, to be fused if the code generator determines that (i) the target
instruction set has support for a fused operation, and (ii) that the
fused operation is more efficient than the equivalent, separate pair
of `mul` and `add` instructions.

https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic

MIRI support is included for f32 and f64.

The codegen_cranelift backend uses the `fma` function from libc, which is
a correct implementation but lacks the desired performance semantics. I
think this requires an update to Cranelift to expose a suitable
instruction in its IR.

I have not tested with codegen_gcc, but it should behave the same
way (using `fma` from libc).
2024-10-11 15:32:56 -06:00
Manuel Drehwald
624c071b99 Single commit implementing the enzyme/autodiff frontend
Co-authored-by: Lorenz Schmidt <bytesnake@mailbox.org>
2024-10-11 19:13:31 +02:00
Ralf Jung
92f65684a8 stabilize const_result 2024-10-11 18:34:28 +02:00
Ralf Jung
181e667626 stabilize duration_consts_float 2024-10-11 18:23:30 +02:00
Matthias Krüger
cac36288b5
Rollup merge of #131512 - j7nw4r:master, r=jhpratt
Fixing rustdoc for LayoutError.

I started reading the std lib from start to finish and noticed that this rustdoc comment wasn't correct.
2024-10-11 12:21:08 +02:00
Jonathan Dönszelmann
0a9c87b1f5
rename RcBox in other places too 2024-10-11 10:04:22 +02:00
Jonathan Dönszelmann
159e67d446
rename RcBox to RcInner for consistency 2024-10-11 00:14:17 +02:00
Johnathan W
8b754fbb4f Fixing rustdoc for LayoutError. 2024-10-10 16:18:56 -04:00
Matthias Krüger
edb669350a
Rollup merge of #130741 - mrkajetanp:detect-b16b16, r=Amanieu
rustc_target: Add sme-b16b16 as an explicit aarch64 target feature

LLVM 20 split what used to be called b16b16, corresponding to the aarch64
FEAT_SVE_B16B16 extension, into sve-b16b16 and sme-b16b16.
Add sme-b16b16 as an explicit feature and update the codegen accordingly.

Resolves https://github.com/rust-lang/rust/pull/129894.
2024-10-10 22:00:48 +02:00
Matthias Krüger
9237937cf0
Rollup merge of #130538 - ultrabear:ultrabear_const_from_ref, r=workingjubilee
Stabilize const `{slice,array}::from_mut`

This PR stabilizes the following APIs as const stable as of rust `1.83`:
```rs
// core::array
pub const fn from_mut<T>(s: &mut T) -> &mut [T; 1];

// core::slice
pub const fn from_mut<T>(s: &mut T) -> &mut [T];
```
This is made possible by `const_mut_refs` being stabilized (yay).

Tracking issue: #90206
2024-10-10 22:00:47 +02:00
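
A minimal sketch of the const-stabilized functions (assumes Rust 1.83+ as stated above):

```rust
use core::{array, slice};

const fn as_array(x: &mut u8) -> &mut [u8; 1] {
    array::from_mut(x)
}

const fn as_slice(x: &mut u8) -> &mut [u8] {
    slice::from_mut(x)
}

const LEN: usize = {
    let mut byte = 0u8;
    as_slice(&mut byte).len()
};

fn main() {
    assert_eq!(LEN, 1);
    let mut b = 5;
    as_array(&mut b)[0] = 6;
    assert_eq!(b, 6);
}
```
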
Tim (Theemathas) Chirananthavat
203573701a More clearly document Stdin::read_line
These are common pitfalls for beginners, so I think it's worth
making the subtleties more visible.
2024-10-10 23:12:03 +07:00
Gabriel Bjørnager Jensen
00f9827599 Stabilise 'const_char_encode_utf8'; 2024-10-10 16:33:27 +02:00
Joshua Wong
5e474f7d83 allocate before calling T::default in <Arc<T>>::default()
Same rationale as in the previous commit.
2024-10-10 09:50:35 -04:00
Joshua Wong
dd0620b867 allocate before calling T::default in <Box<T>>::default()
The `Box<T: Default>` impl currently calls `T::default()` before allocating
the `Box`.

Most `Default` impls are trivial, which should in theory allow
LLVM to construct `T` directly in the `Box` allocation when calling
`<Box<T>>::default()`.

However, the allocation may fail, which necessitates calling `T`'s destructor if it has one.
If the destructor is non-trivial, then LLVM has a hard time proving that it's
sound to elide it, which makes it construct `T` on the stack first and then copy it into the allocation.

Create an uninit `Box` first, and then write `T::default()` into it, so that LLVM now only needs to prove
that `T::default()` can't panic, which should be trivial for most `Default` impls.
2024-10-10 09:49:24 -04:00
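
A simplified sketch of the pattern described above, not the exact std implementation:

```rust
use std::mem::MaybeUninit;

fn boxed_default<T: Default>() -> Box<T> {
    // Allocate first, then construct T directly into the allocation.
    let mut boxed: Box<MaybeUninit<T>> = Box::new_uninit();
    boxed.write(T::default());
    // SAFETY: the value was just initialized by `write`.
    unsafe { boxed.assume_init() }
}

fn main() {
    let v: Box<Vec<u8>> = boxed_default();
    assert!(v.is_empty());
}
```
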
Kajetan Puchalski
335f67b652 rustc_target: Add sme-b16b16 as an explicit aarch64 target feature
LLVM 20 split what used to be called b16b16, corresponding to the aarch64
FEAT_SVE_B16B16 extension, into sve-b16b16 and sme-b16b16.
Add sme-b16b16 as an explicit feature and update the codegen accordingly.
2024-10-10 10:24:57 +00:00
Kajetan Puchalski
2900c58a02 stdarch: Bump stdarch submodule 2024-10-10 10:16:16 +00:00
Ben Kimock
aec09a43ef Clean up is_aligned_and_not_null 2024-10-09 19:34:27 -04:00
Ben Kimock
84dacc1882 Add more precondition check tests 2024-10-09 19:34:27 -04:00
Ben Kimock
0c41c3414c Allow zero-size reads/writes on null pointers 2024-10-09 19:34:27 -04:00
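
If the title refers to the documented rule that zero-sized memory accesses are permitted through any pointer, including null, a minimal sketch would be:

```rust
use std::ptr;

fn main() {
    // Copying zero bytes touches no memory, so null pointers are acceptable
    // here under the zero-size access rule.
    unsafe {
        ptr::copy_nonoverlapping::<u8>(ptr::null(), ptr::null_mut(), 0);
    }
}
```
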
ltdk
6524acf04b Optimize escape_ascii 2024-10-09 17:17:50 -04:00
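
For context, the stable entry point behind this optimization is `<[u8]>::escape_ascii`; a quick usage sketch:

```rust
fn main() {
    let bytes = b"ab\ncd\x7f";
    // Non-printable and non-ASCII bytes are escaped; printable ASCII passes through.
    assert_eq!(bytes.escape_ascii().to_string(), "ab\\ncd\\x7f");
}
```
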
Matthias Krüger
7a76489454
Rollup merge of #131462 - cuviper:open_buffered-error, r=RalfJung
Mention allocation errors for `open_buffered`

This documents that `File::open_buffered` may return an error on allocation failure.
2024-10-09 23:03:50 +02:00
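
A hedged sketch of the `file_buffered` API this documents (still unstable at the time; the exact signature is assumed here):

```rust
#![feature(file_buffered)] // nightly-only when this commit landed

use std::fs::File;
use std::io::BufRead;

fn main() -> std::io::Result<()> {
    // open_buffered allocates the BufReader's buffer up front, so allocation
    // failure is one of the documented error cases.
    let reader = File::open_buffered("Cargo.toml")?;
    println!("lines: {}", reader.lines().count());
    Ok(())
}
```
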
Matthias Krüger
866869bbbd
Rollup merge of #131449 - nickrum:wasip2-net-decouple-fd, r=alexcrichton
Decouple WASIp2 sockets from WasiFd

This is a follow-up to #129638, decoupling WASIp2's socket implementation from WASIp1's `WasiFd`, as discussed with `@alexcrichton`.

Quite a few trait implementations in `std::os::fd` rely on the fact that there is an additional layer of abstraction between `Socket` and `OwnedFd`. I thus had to add a thin `WasiSocket` wrapper struct that just "forwards" to `OwnedFd`. Alternatively, I could have added a lot of conditional compilation to `std::os::fd`, which feels even worse.

Since `WasiFd::sock_accept` is no longer accessible from `TcpListener` and since WASIp2 has proper support for accepting sockets through `Socket::accept`, the `std::os::wasi::net` module has been removed from WASIp2, which only contains a single `TcpListenerExt` trait with a `sock_accept` method as well as an implementation for `TcpListener`. Let me know if this is an acceptable solution.
2024-10-09 23:03:50 +02:00
Matthias Krüger
d58345010c
Rollup merge of #131383 - AngelicosPhosphoros:better_doc_for_slice_slicing_at_ends, r=cuviper
Add docs about slicing slices at the ends

Closes https://github.com/rust-lang/rust/issues/60783
2024-10-09 23:03:48 +02:00
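
A tiny sketch of the boundary case those docs cover: an index equal to the length is a valid (empty) slice, while anything past it panics.

```rust
fn main() {
    let s = [1, 2, 3];
    assert!(s[3..].is_empty()); // at the end: fine, empty slice
    assert!(s[..0].is_empty()); // at the start: fine, empty slice
    // let _ = &s[4..];         // past the end: would panic
}
```
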