Commit graph

133 commits

Yuki Okushi
21bfe529c7
Rollup merge of #75270 - matthiaskrgr:clippy_aug_1, r=Dylan-DPC
fix a couple of clippy findings
2020-08-08 11:36:12 +09:00
Yuki Okushi
02bf036c6c
Rollup merge of #75253 - RalfJung:cleanup-const-hack, r=oli-obk
clean up const-hacks in int endianness conversion functions

Cleans up the const hacks added in https://github.com/rust-lang/rust/pull/69373.

r? @oli-obk
2020-08-08 11:36:07 +09:00
Yuki Okushi
b032a15fa8
Rollup merge of #75250 - RalfJung:uninit-const-ptr, r=oli-obk
make MaybeUninit::as_(mut_)ptr const

I think it was just an oversight that they are not const yet.

I also changed their implementation as the old one created references to uninitialized memory.^^
2020-08-08 11:36:05 +09:00
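A minimal sketch of the raw-pointer pattern this enables: `as_mut_ptr` hands out a `*mut T` without ever creating a reference to the uninitialized contents (const-ness additionally lets these calls appear in `const fn`):

```rust
use std::mem::MaybeUninit;

fn main() {
    let mut buf = MaybeUninit::<u32>::uninit();
    // as_mut_ptr yields a raw pointer without creating a reference to the
    // uninitialized contents, so no UB is involved here.
    unsafe {
        buf.as_mut_ptr().write(42);
        assert_eq!(buf.assume_init(), 42);
    }
}
```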
bors
f3a9de9b08 Auto merge of #75048 - eggyal:force-no-tco-start-backtrace-frame, r=Mark-Simulacrum
Prevent `__rust_begin_short_backtrace` frames from being tail-call optimised away

I've stumbled across some situations where there (unexpectedly) was no `__rust_begin_short_backtrace` frame on the stack during unwinding.

On closer examination, it appeared that the calls to that function had been tail-call optimised away.

This PR follows [@bjorn3's suggestion on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Disabling.20tail.20call.20optimisation.3F/near/205699133), by adding calls to `black_box` that hint to rustc not to perform TCO.

Fixes #47429
2020-08-08 00:00:52 +00:00
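A sketch of the pattern (not the actual std source): an opaque use of a value after the call takes the call out of tail position, so it cannot be tail-call optimised away and the frame stays visible during unwinding:

```rust
use std::hint::black_box; // stabilized later; the PR used an internal hint

fn begin_short_backtrace<F: FnOnce() -> T, T>(f: F) -> T {
    let result = f();
    // Because something observable happens after `f()`, the call above is
    // no longer a tail call and rustc will not optimise the frame away.
    black_box(());
    result
}

fn main() {
    assert_eq!(begin_short_backtrace(|| 1 + 1), 2);
}
```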
Matthias Krüger
a605e51056 fix clippy::needless_return: remove unneeded return statements 2020-08-08 00:57:37 +02:00
bors
c2d1b0d980 Auto merge of #75071 - ssomers:btree_cleanup_5, r=Mark-Simulacrum
BTreeMap: enforce the panic rule imposed by `replace`

Also, reveal the unsafe parts in the closures fed to it.

r? @Mark-Simulacrum
2020-08-07 21:48:32 +00:00
Alan Egerton
5792840bf5 Prevent __rust_begin_short_backtrace frames from being tail-call optimised away 2020-08-07 19:31:25 +01:00
Stein Somers
734fc0477c BTreeMap: enforce the panic rule imposed by replace 2020-08-07 19:51:26 +02:00
Ralf Jung
a530934951 clean up const-hacks in int endianness conversion functions 2020-08-07 13:45:55 +02:00
Ralf Jung
ec5d78d350 fix feature gate and tracking issue 2020-08-07 12:38:55 +02:00
Ralf Jung
0aee186723 make MaybeUninit::as_(mut_)ptr const 2020-08-07 12:24:28 +02:00
bors
8b26609481 Auto merge of #70052 - Amanieu:hashbrown7, r=Mark-Simulacrum
Update hashbrown to 0.8.1

This update includes:
- https://github.com/rust-lang/hashbrown/pull/146, which improves the performance of `Clone` and implements `clone_from`.
- https://github.com/rust-lang/hashbrown/pull/159, which reduces the size of `HashMap` by 8 bytes.
- https://github.com/rust-lang/hashbrown/pull/162, which avoids creating small 1-element tables.

Fixes #28481
2020-08-07 08:36:15 +00:00
bors
d4c940f082 Auto merge of #75244 - Manishearth:rollup-dzfyjva, r=Manishearth
Rollup of 4 pull requests

Successful merges:

 - #74774 (adds [*mut|*const] ptr::set_ptr_value)
 - #75079 (Disallow linking to items with a mismatched disambiguator)
 - #75203 (Make `IntoIterator` lifetime bounds of `&BTreeMap` match with `&HashMap` )
 - #75227 (Fix ICE when using asm! on an unsupported architecture)

Failed merges:

r? @ghost
2020-08-07 06:40:53 +00:00
Manish Goregaokar
25c8e9ac17
Rollup merge of #75203 - canova:btreemap-into-iter, r=dtolnay
Make `IntoIterator` lifetime bounds of `&BTreeMap` match with `&HashMap`

This is a pretty small change to the lifetime bounds of the `IntoIterator` implementations of both `&BTreeMap` and `&mut BTreeMap`. It loosens the lifetime bounds, so more code should be accepted with this PR. The bounds will still be implicit since we have `type Item = (&'a K, &'a V);` in the implementation. This change makes `HashMap` and `BTreeMap` share the same signature, so the same function or trait can be used with both in the code.

Fixes #74034.
r? @dtolnay hey, I was touching this file on my previous PR and wanted to fix this on the way. Would you mind taking a look at this, or redirecting it if you are busy?
2020-08-06 23:04:05 -07:00
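For illustration, a hypothetical generic helper that the now-matching bounds allow to work over either borrowed map type:

```rust
use std::collections::{BTreeMap, HashMap};

// Works for any borrowed map whose `IntoIterator` yields (&K, &V) pairs,
// which now includes both &HashMap and &BTreeMap with the same signature.
fn keys<'a, M, K: 'a, V: 'a>(map: &'a M) -> Vec<&'a K>
where
    &'a M: IntoIterator<Item = (&'a K, &'a V)>,
{
    map.into_iter().map(|(k, _)| k).collect()
}

fn main() {
    let mut b = BTreeMap::new();
    b.insert(1, "one");
    let mut h = HashMap::new();
    h.insert(1, "one");
    assert_eq!(keys(&b), keys(&h));
}
```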
Manish Goregaokar
5f331c0585
Rollup merge of #74774 - oliver-giersch:set_data_ptr, r=dtolnay
adds [*mut|*const] ptr::set_ptr_value

I propose the addition of these two functions to `*mut T` and `*const T`, respectively. The motivation for this is primarily byte-wise pointer arithmetic on (potentially) fat pointers, i.e. for types with a `T: ?Sized` bound. A concrete use-case has been discussed in [this](https://internals.rust-lang.org/t/byte-wise-fat-pointer-arithmetic/12739) thread.
TL;DR: Currently, byte-wise pointer arithmetic with potentially fat pointers is not possible in either stable or nightly Rust without making assumptions about the layout of fat pointers, which is currently still an implementation detail and not formally stabilized. This PR adds one function each to `*mut T` and `*const T`, making it possible to circumvent this restriction without exposing any internal implementation details.
One possible alternative would be to add specific byte-wise pointer arithmetic functions to the two pointer types in addition to the already existing count-wise functions. However, I feel this fairly niche use case does not warrant adding a whole set of new functions like `add_bytes`, `offset_bytes`, `wrapping_offset_bytes`, etc. (times two, one for each pointer type) to `libcore`.
2020-08-06 23:04:02 -07:00
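A sketch of the intended use, assuming the (then nightly-only) `set_ptr_value` feature; `wrapping_offset_bytes` is a hypothetical helper for illustration, not part of the PR:

```rust
#![feature(set_ptr_value)] // nightly-only at the time of this PR

// Advance the data address of a (potentially fat) pointer by `count` bytes
// while preserving its metadata (slice length, vtable pointer, ...),
// without assuming anything about fat-pointer layout.
fn wrapping_offset_bytes<T: ?Sized>(ptr: *const T, count: isize) -> *const T {
    let addr = (ptr as *const u8).wrapping_offset(count);
    ptr.set_ptr_value(addr)
}

fn main() {
    let s: &[u8] = &[1, 2, 3, 4];
    let p: *const [u8] = s;
    // Offset forward and back; the length metadata is carried along unchanged.
    let q = wrapping_offset_bytes(wrapping_offset_bytes(p, 1), -1);
    assert_eq!(unsafe { &*q }, s);
}
```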
Amanieu d'Antras
d51b7b229a Update hashbrown to 0.8.1 2020-08-07 07:03:12 +01:00
bors
98922795f6 Auto merge of #75121 - tmiasko:str-slicing, r=Mark-Simulacrum
Avoid `unwrap_or_else` in str indexing

This provides a small reduction of generated LLVM IR, and leads to simpler
assembly code.

Closes #68874.
2020-08-07 04:51:04 +00:00
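The shape of the change, sketched on a standalone function rather than the actual std internals: an explicit match avoids materialising the closure that `unwrap_or_else` would need.

```rust
// Sketch (not the std source): the failure path written as a match,
// which trims the generated LLVM IR on the common indexing path.
fn index_str(s: &str, r: std::ops::Range<usize>) -> &str {
    match s.get(r) {
        Some(sub) => sub,
        None => panic!("string index out of bounds"),
    }
}

fn main() {
    assert_eq!(index_str("hello", 1..3), "el");
}
```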
Yuki Okushi
26705d5bcb
Rollup merge of #75211 - lzutao:native-endian-notes, r=lcnr
Note about endianness of returned value of {integer}::from_be_bytes and friends

[`u32::from_be`](https://doc.rust-lang.org/nightly/src/core/num/mod.rs.html#2883-2892) documents about endianness of returned value.

I was confused by endianness of `from_be_bytes` in #75086 .
2020-08-07 09:35:26 +09:00
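A small example of the documented behaviour:

```rust
fn main() {
    // The *input* bytes are read as big-endian; the returned u32 is an
    // ordinary native-endian value on every target.
    let n = u32::from_be_bytes([0x12, 0x34, 0x56, 0x78]);
    assert_eq!(n, 0x12345678);
}
```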
Yuki Okushi
1b61fd3ccf
Rollup merge of #75179 - lzutao:unsed-ipv4-frominner, r=alexcrichton
Remove unused FromInner impl for Ipv4Addr

The removed impl was an unused, unstable implementation.
2020-08-07 09:35:16 +09:00
Yuki Okushi
c9c7048038
Rollup merge of #75175 - lzutao:doctest-ipv4-fromu32, r=cuviper
Make doctests of Ipv4Addr::from(u32) easier to read

There are many zeroes in `0x0d0c0b0au32`, which makes it hard to read.
2020-08-07 09:35:14 +09:00
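Illustrative before/after (the exact replacement constant used in the PR may differ):

```rust
use std::net::Ipv4Addr;

fn main() {
    // Before: Ipv4Addr::from(0x0d0c0b0au32) — hard to map to octets.
    // After (illustrative): the hex digit pairs line up with the octets.
    let addr = Ipv4Addr::from(0x12345678u32);
    assert_eq!(Ipv4Addr::new(0x12, 0x34, 0x56, 0x78), addr);
}
```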
Lzu Tao
eff7d568d8 Note about endianness of returned value
in {integer}::from_be_bytes and friends.
2020-08-06 07:33:07 +00:00
Tomasz Miąsko
888bc07c6b Keep stdout open in limit_vector_count test 2020-08-06 00:00:00 +00:00
bors
c15bae53b5 Auto merge of #75086 - lzutao:u32const, r=oli-obk
Use u32::from_ne_bytes to fix a FIXME and add a comment about that

`u32::from_ne_bytes` has been const stable since 1.44.
2020-08-06 14:21:48 +00:00
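For example:

```rust
// u32::from_ne_bytes has been const-stable since Rust 1.44, so a
// byte-array-to-integer conversion can be a plain const item:
const N: u32 = u32::from_ne_bytes([1, 2, 3, 4]);

fn main() {
    // Round-tripping through native endianness is endian-independent.
    assert_eq!(N.to_ne_bytes(), [1, 2, 3, 4]);
}
```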
Nazım Can Altınova
62e06a4d09
Make IntoIterator lifetime bounds of &BTreeMap match with &HashMap 2020-08-05 23:32:13 +02:00
Adam Reichold
9073acdc98 Add fallback for cfg(unix) targets that do not define libc::_SC_IOV_MAX. 2020-08-05 17:15:08 +02:00
Adam Reichold
04a0114e7e Rely only on POSIX semantics for I/O vector count
All #[cfg(unix)] platforms follow the POSIX standard and define _SC_IOV_MAX, so
we can rely purely on POSIX semantics to determine the limits on I/O vector
count.
2020-08-05 16:57:02 +02:00
Adam Reichold
87edccf0f0 Reduce synchronization overhead of I/O vector count memoization 2020-08-05 16:57:02 +02:00
Adam Reichold
6672f7be03 Memoize the I/O vector count limit
Keep the I/O vector count limit in a `SyncOnceCell` to avoid the overhead of
repeatedly calling `sysconf` as these limits are guaranteed to not change during
the lifetime of a process by POSIX.
2020-08-05 16:57:02 +02:00
Adam Reichold
9468752581 Query maximum vector count on Linux and macOS
Both Linux and macOS enforce limits on the vector count when performing vectored
I/O via the readv and writev system calls and return EINVAL when these limits
are exceeded. This changes the standard library to handle those limits as short
reads and writes to avoid forcing its users to query these limits using
platform-specific mechanisms.
2020-08-05 16:57:02 +02:00
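A minimal sketch of the memoization pattern described in these commits, assuming the `libc` crate; the real std code is `#[cfg(unix)]`-gated and has per-target fallbacks.

```rust
#![feature(once_cell)] // SyncOnceCell was nightly-only at the time

use std::lazy::SyncOnceCell;

fn max_iov() -> usize {
    static LIMIT: SyncOnceCell<usize> = SyncOnceCell::new();
    *LIMIT.get_or_init(|| {
        // POSIX guarantees sysconf(_SC_IOV_MAX) is constant for the
        // lifetime of the process, so querying it once is enough.
        match unsafe { libc::sysconf(libc::_SC_IOV_MAX) } {
            -1 => 16, // hypothetical fallback when the limit is indeterminate
            max => max as usize,
        }
    })
}

fn main() {
    // readv/writev calls can then be clamped to at most max_iov() buffers,
    // turning what would be EINVAL into a short read or write.
    println!("IOV_MAX = {}", max_iov());
}
```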
Lzu Tao
d9f260e95e Remove unused FromInner impl for Ipv4Addr 2020-08-05 05:53:07 +00:00
Lzu Tao
725d37cae0 Make doctests of Ipv4Addr::from(u32) easier to read 2020-08-05 05:31:17 +00:00
bors
dab2ae0404 Auto merge of #75037 - richkadel:llvm-coverage-map-gen-5.2, r=wesleywiser
Completes support for coverage in external crates

Follow-up to #74959 :

The prior PR corrected for errors encountered when trying to generate
the coverage map on source code inlined from external crates (including
macros and generics) by avoiding adding external DefIds to the coverage
map.

This made it possible to generate a coverage report including external
crates, but the external crate coverage was incomplete (it did not include
coverage for the DefIds that were eliminated).

The root issue was that the coverage map was converting Span locations
to source file and locations, using the SourceMap for the current crate,
and this would not work for spans from external crates (compiled with a
different SourceMap).

The solution was to convert the Spans to filename and location during
MIR generation instead, so precompiled external crates would already
have the correct source code locations embedded in their MIR, when
imported into another crate.

@wesleywiser FYI
r? @tmandry
2020-08-05 05:08:19 +00:00
Lzu Tao
30a1455c8d Use u32::from_ne_bytes to fix a FIXME
Co-authored-by: Weiyi Wang <wwylele@gmail.com>
Co-authored-by: Adam Reichold <adam.reichold@t-online.de>
Co-authored-by: Josh Stone <cuviper@gmail.com>
Co-authored-by: Scott McMurray <scottmcm@users.noreply.github.com>
Co-authored-by: tmiasko <tomasz.miasko@gmail.com>
2020-08-05 02:49:26 +00:00
Rich Kadel
e0dc8dec27 Completes support for coverage in external crates
The prior PR corrected for errors encountered when trying to generate
the coverage map on source code inlined from external crates (including
macros and generics) by avoiding adding external DefIds to the coverage
map.

This made it possible to generate a coverage report including external
crates, but the external crate coverage was incomplete (it did not include
coverage for the DefIds that were eliminated).

The root issue was that the coverage map was converting Span locations
to source file and locations, using the SourceMap for the current crate,
and this would not work for spans from external crates (compiled with a
different SourceMap).

The solution was to convert the Spans to filename and location during
MIR generation instead, so precompiled external crates would already
have the correct source code locations embedded in their MIR, when
imported into another crate.
2020-08-04 11:06:54 -07:00
Tim Diekmann
93d98328d1
Revert missing "memory block" 2020-08-04 19:24:08 +02:00
Tim Diekmann
929e37d4bf Revert renaming of "memory block" 2020-08-04 19:15:48 +02:00
Tim Diekmann
ab9362ad9a Replace MemoryBlock with NonNull<[u8]> 2020-08-04 18:03:34 +02:00
bors
5f6bd6ec0a Auto merge of #74850 - TimDiekmann:remove-in-place-alloc, r=Amanieu
Remove in-place allocation and revert to separate methods for zeroed allocations

closes rust-lang/wg-allocators#58
2020-08-04 11:22:45 +00:00
bors
80f84eb9c6 Auto merge of #75058 - ssomers:btree_cleanup_insert_2, r=Mark-Simulacrum
Clarify reuse of a BTreeMap insert support function and treat split support likewise

r? @Mark-Simulacrum
2020-08-04 03:48:48 +00:00
Yuki Okushi
622759d129
Rollup merge of #75084 - Aaron1011:stabilize/ident-new-raw, r=petrochenkov
Stabilize Ident::new_raw

Tracking issue: #54723

This is a continuation of PR #59002
2020-08-04 09:27:06 +09:00
Yuki Okushi
cc0ac7eece
Rollup merge of #74759 - carbotaniuman:uabs, r=shepmaster
add `unsigned_abs` to signed integers

Mentioned on rust-lang/rfcs#2914

This PR simply adds an `unsigned_abs` function to the signed integer types, which returns the correct absolute value as an unsigned integer.
2020-08-04 09:26:58 +09:00
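Example (the method was still feature-gated at the time of this PR):

```rust
#![feature(unsigned_abs)] // nightly-only when this landed

fn main() {
    // i32::MIN.abs() would overflow (and panics in debug builds);
    // unsigned_abs returns the correct magnitude as a u32.
    assert_eq!(i32::MIN.unsigned_abs(), 2_147_483_648u32);
    assert_eq!((-5i32).unsigned_abs(), 5);
}
```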
Tim Diekmann
6395659168
Apply suggestions from code review
Co-authored-by: Amanieu d'Antras <amanieu@gmail.com>
2020-08-04 00:21:05 +02:00
Tomasz Miąsko
427634b503 Avoid unwrap_or_else in str indexing
This provides a small reduction of generated LLVM IR, and leads to simpler
assembly code.
2020-08-04 00:01:48 +02:00
bors
d8cbd9caca Auto merge of #74526 - erikdesjardins:reftrack, r=Mark-Simulacrum
Add track_caller to RefCell::{borrow, borrow_mut}

So panic messages point at the offending borrow.

Fixes #74472
2020-08-03 21:43:27 +00:00
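Example of the improved diagnostics:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(0);
    let _held = cell.borrow_mut();
    // With #[track_caller] on borrow/borrow_mut, the "already mutably
    // borrowed" panic now reports this line instead of a location
    // inside RefCell's implementation.
    let _conflict = cell.borrow(); // panics here
}
```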
Aaron Hill
6deda6a6a0
Stabilize Ident::new_raw
Tracking issue: #54723

This is a continuation of PR #59002
2020-08-03 17:23:31 -04:00
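A minimal sketch of usage; this only compiles inside a `proc-macro` crate:

```rust
use proc_macro::{Ident, Span, TokenStream, TokenTree};

#[proc_macro]
pub fn raw_type_ident(_input: TokenStream) -> TokenStream {
    // new_raw yields the raw identifier `r#type`, so generated code can
    // name an item called `type` even though `type` is a keyword.
    let ident = Ident::new_raw("type", Span::call_site());
    TokenStream::from(TokenTree::Ident(ident))
}
```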
bors
829d69b9c6 Auto merge of #74827 - ssomers:btree_cleanup_insert, r=Mark-Simulacrum
Move bulk of BTreeMap::insert method down to new method on handle

Adjust the boundary between the map and node layers for insertion: do more in the node layer, keep root manipulation and pointer dereferencing separate. No change in undefined behaviour or performance.

r? @Mark-Simulacrum
2020-08-03 15:46:02 +00:00
oliver-giersch
6c81556a36
adds [*mut|*const] ptr::set_ptr_value 2020-08-03 04:17:45 -07:00
kennytm
fd7596c9a5
fix broken git commit in stdarch 2020-08-03 15:52:30 +08:00
Tim Diekmann
24ddf76ed7
Merge branch 'master' into remove-in-place-alloc 2020-08-03 02:18:20 +02:00
bors
19ecce332e Auto merge of #74948 - lzutao:stalize-result-as-deref, r=dtolnay
Stabilize `Result::as_deref` and `as_deref_mut`

FCP completed in https://github.com/rust-lang/rust/issues/50264#issuecomment-645681400.

This PR stabilizes two new APIs for `std::result::Result`:
```rust
fn as_deref(&self) -> Result<&T::Target, &E> where T: Deref;
fn as_deref_mut(&mut self) -> Result<&mut T::Target, &mut E> where T: DerefMut;
```

This PR also removes two rarely used unstable APIs from `Result`:
```rust
fn as_deref_err(&self) -> Result<&T, &E::Target> where E: Deref;
fn as_deref_mut_err(&mut self) -> Result<&mut T, &mut E::Target> where E: DerefMut;
```

Closes #50264
2020-08-02 23:55:12 +00:00
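Example of the stabilized pair:

```rust
fn main() {
    let x: Result<String, u32> = Ok("hello".to_string());
    // Ok(String) -> Ok(&str): Deref applies to the Ok variant,
    // while the Err variant is merely borrowed.
    let y: Result<&str, &u32> = x.as_deref();
    assert_eq!(y, Ok("hello"));
}
```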