Auto merge of #53530 - kennytm:rollup, r=kennytm
Rollup of 17 pull requests

Successful merges:

 - #53030 (Updated RELEASES.md for 1.29.0)
 - #53104 (expand the documentation on the `Unpin` trait)
 - #53213 (Stabilize IP associated constants)
 - #53296 (When closure with no arguments was expected, suggest wrapping)
 - #53329 (Replace usages of ptr::offset with ptr::{add,sub}.)
 - #53363 (add individual docs to `core::num::NonZero*`)
 - #53370 (Stabilize macro_vis_matcher)
 - #53393 (Mark libserialize functions as inline)
 - #53405 (restore the page title after escaping out of a search)
 - #53452 (Change target triple used to check for lldb in build-manifest)
 - #53462 (Document Box::into_raw returns non-null ptr)
 - #53465 (Remove LinkMeta struct)
 - #53492 (update lld submodule to include RISCV patch)
 - #53496 (Fix typos found by codespell.)
 - #53521 (syntax: Optimize some literal parsing)
 - #53540 (Moved issue-53157.rs into src/test/ui/consts/const-eval/)
 - #53551 (Avoid some Place clones.)

Failed merges:

r? @ghost
Commit 9f9f2c0095. 183 changed files with 571 additions and 510 deletions.

RELEASES.md
@@ -1,3 +1,78 @@
Version 1.29.0 (2018-09-13)
===========================

Compiler
--------
- [Bumped minimum LLVM version to 5.0.][51899]
- [Added `powerpc64le-unknown-linux-musl` target.][51619]
- [Added `aarch64-unknown-hermit` and `x86_64-unknown-hermit` targets.][52861]

Libraries
---------
- [`Once::call_once` now no longer requires `Once` to be `'static`.][52239]
- [`BuildHasherDefault` now implements `PartialEq` and `Eq`.][52402]
- [`Box<CStr>`, `Box<OsStr>`, and `Box<Path>` now implement `Clone`.][51912]
- [Implemented `PartialEq<&str>` for `OsString` and `PartialEq<OsString>`
  for `&str`.][51178]
- [`Cell<T>` now allows `T` to be unsized.][50494]
- [`SocketAddr` is now stable on Redox.][52656]

Stabilized APIs
---------------
- [`Arc::downcast`]
- [`Iterator::flatten`]
- [`Rc::downcast`]

Cargo
-----
- [Cargo can silently fix some bad lockfiles.][cargo/5831] You can use
  `--locked` to disable this behaviour.
- [`cargo-install` will now allow you to cross compile an install
  using `--target`.][cargo/5614]
- [Added the `cargo-fix` subcommand to automatically move project code from
  the 2015 edition to 2018.][cargo/5723]

Misc
----
- [`rustdoc` now has the `--cap-lints` option, which demotes all lints above
  the specified level to that level.][52354] For example, `--cap-lints warn`
  will demote `deny` and `forbid` lints to `warn`.
- [`rustc` and `rustdoc` will now have the exit code of `1` if compilation
  fails and `101` if there is a panic.][52197]

Compatibility Notes
-------------------
- [`str::{slice_unchecked, slice_unchecked_mut}` are now deprecated.][51807]
  Use `str::get_unchecked(begin..end)` instead.
- [`std::env::home_dir` is now deprecated for its unintuitive behaviour.][51656]
  Consider using the `home_dir` function from
  https://crates.io/crates/dirs instead.
- [`rustc` will no longer silently ignore invalid data in target spec.][52330]

[52861]: https://github.com/rust-lang/rust/pull/52861/
[52656]: https://github.com/rust-lang/rust/pull/52656/
[52239]: https://github.com/rust-lang/rust/pull/52239/
[52330]: https://github.com/rust-lang/rust/pull/52330/
[52354]: https://github.com/rust-lang/rust/pull/52354/
[52402]: https://github.com/rust-lang/rust/pull/52402/
[52103]: https://github.com/rust-lang/rust/pull/52103/
[52197]: https://github.com/rust-lang/rust/pull/52197/
[51807]: https://github.com/rust-lang/rust/pull/51807/
[51899]: https://github.com/rust-lang/rust/pull/51899/
[51912]: https://github.com/rust-lang/rust/pull/51912/
[51511]: https://github.com/rust-lang/rust/pull/51511/
[51619]: https://github.com/rust-lang/rust/pull/51619/
[51656]: https://github.com/rust-lang/rust/pull/51656/
[51178]: https://github.com/rust-lang/rust/pull/51178/
[50494]: https://github.com/rust-lang/rust/pull/50494/
[cargo/5614]: https://github.com/rust-lang/cargo/pull/5614/
[cargo/5723]: https://github.com/rust-lang/cargo/pull/5723/
[cargo/5831]: https://github.com/rust-lang/cargo/pull/5831/
[`Arc::downcast`]: https://doc.rust-lang.org/std/sync/struct.Arc.html#method.downcast
[`Iterator::flatten`]: https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.flatten
[`Rc::downcast`]: https://doc.rust-lang.org/std/rc/struct.Rc.html#method.downcast


Version 1.28.0 (2018-08-02)
===========================
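The stabilized APIs listed above can be exercised directly; a small sketch using `Iterator::flatten` and `Rc::downcast`, both stabilized in this release:

```rust
use std::any::Any;
use std::rc::Rc;

fn main() {
    // `Iterator::flatten` collapses one level of nesting.
    let flat: Vec<i32> = vec![vec![1, 2], vec![3]].into_iter().flatten().collect();
    assert_eq!(flat, [1, 2, 3]);

    // `Rc::downcast` recovers the concrete type behind an `Rc<dyn Any>`.
    let shared: Rc<dyn Any> = Rc::new(7i32);
    let seven: Rc<i32> = shared.downcast().unwrap();
    assert_eq!(*seven, 7);
}
```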
@@ -32,7 +32,7 @@ shift

export CFLAGS="-fPIC $CFLAGS"

-# FIXME: remove the patch when upate to 1.1.20
+# FIXME: remove the patch when updating to 1.1.20
MUSL=musl-1.1.19

# may have been downloaded in a previous run
@@ -34,7 +34,7 @@ minimum. It also includes exercises!

# Use Rust

-Once you've gotten familliar with the language, these resources can help you
+Once you've gotten familiar with the language, these resources can help you
when you're actually using it day-to-day.

## The Standard Library
@@ -153,7 +153,7 @@ This option allows you to put extra data in each output filename.
This flag lets you control how many threads are used when doing
code generation.

-Increasing paralellism may speed up compile times, but may also
+Increasing parallelism may speed up compile times, but may also
produce slower code.

## remark
@@ -56,7 +56,7 @@ mod m {
    pub struct S(u8);

    fn f() {
-        // this is trying to use S from the 'use' line, but becuase the `u8` is
+        // this is trying to use S from the 'use' line, but because the `u8` is
        // not pub, it is private
        ::S;
    }

@@ -103,7 +103,7 @@ This warning can always be fixed by removing the unused pattern in the

## mutable-transmutes

-This lint catches transmuting from `&T` to `&mut T` becuase it is undefined
+This lint catches transmuting from `&T` to `&mut T` because it is undefined
behavior. Some example code that triggers this lint:

```rust,ignore
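As a companion to the lint description above, a sketch of the sanctioned alternative: `Cell` provides mutation through a shared reference without the undefined behavior of transmuting `&T` to `&mut T`.

```rust
use std::cell::Cell;

fn main() {
    // Transmuting &T to &mut T is undefined behavior (the lint above is
    // deny-by-default). For shared mutability, use interior mutability.
    let c = Cell::new(1);
    let r: &Cell<i32> = &c; // a shared reference, yet mutation is allowed
    r.set(2);
    assert_eq!(c.get(), 2);
}
```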
@@ -1,6 +1,6 @@
# Unstable features

-Rustdoc is under active developement, and like the Rust compiler, some features are only available
+Rustdoc is under active development, and like the Rust compiler, some features are only available
on the nightly releases. Some of these are new and need some more testing before they're able to get
released to the world at large, and some of them are tied to features in the Rust compiler that are
themselves unstable. Several features here require a matching `#![feature(...)]` attribute to
@@ -6,12 +6,12 @@ The tracking issue for this feature is: [#44493]

------------------------
The `infer_outlives_requirements` feature indicates that certain
-outlives requirements can be infered by the compiler rather than
+outlives requirements can be inferred by the compiler rather than
stating them explicitly.

For example, currently generic struct definitions that contain
references, require where-clauses of the form T: 'a. By using
-this feature the outlives predicates will be infered, although
+this feature the outlives predicates will be inferred, although
they may still be written explicitly.

```rust,ignore (pseudo-Rust)
@@ -6,7 +6,7 @@ The tracking issue for this feature is: [#44493]

------------------------
The `infer_static_outlives_requirements` feature indicates that certain
-`'static` outlives requirements can be infered by the compiler rather than
+`'static` outlives requirements can be inferred by the compiler rather than
stating them explicitly.

Note: It is an accompanying feature to `infer_outlives_requirements`,
@@ -14,7 +14,7 @@ which must be enabled to infer outlives requirements.

For example, currently generic struct definitions that contain
references, require where-clauses of the form T: 'static. By using
-this feature the outlives predicates will be infered, although
+this feature the outlives predicates will be inferred, although
they may still be written explicitly.

```rust,ignore (pseudo-Rust)
@@ -1,14 +0,0 @@
-# `macro_vis_matcher`
-
-The tracking issue for this feature is: [#41022]
-
-With this feature gate enabled, the [list of fragment specifiers][frags] gains one more entry:
-
-* `vis`: a visibility qualifier. Examples: nothing (default visibility); `pub`; `pub(crate)`.
-
-A `vis` variable may be followed by a comma, ident, type, or path.
-
-[#41022]: https://github.com/rust-lang/rust/issues/41022
-[frags]: ../book/first-edition/macros.html#syntactic-requirements
-
-------------------------
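The `vis` matcher whose standalone documentation is deleted here was stabilized by #53370, so it no longer needs a feature gate. A minimal sketch of what it matches (the `make_getter!` name is illustrative, not from the source):

```rust
// `$v:vis` matches an optional visibility qualifier:
// nothing (default visibility), `pub`, `pub(crate)`, and so on.
macro_rules! make_getter {
    ($v:vis fn $name:ident -> $t:ty = $val:expr) => {
        $v fn $name() -> $t { $val }
    };
}

make_getter!(pub fn answer -> u32 = 42);
make_getter!(fn hidden -> u32 = 7); // the empty visibility also matches

fn main() {
    assert_eq!(answer(), 42);
    assert_eq!(hidden(), 7);
}
```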
@@ -183,7 +183,6 @@ that warns about any item named `lintme`.
```rust,ignore
#![feature(plugin_registrar)]
#![feature(box_syntax, rustc_private)]
-#![feature(macro_vis_matcher)]
#![feature(macro_at_most_once_rep)]

extern crate syntax;
@@ -245,7 +245,7 @@ mod tests {
    .unwrap_or_else(|_| handle_alloc_error(layout));

let mut i = ptr.cast::<u8>().as_ptr();
-let end = i.offset(layout.size() as isize);
+let end = i.add(layout.size());
while i < end {
    assert_eq!(*i, 0);
    i = i.offset(1);
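The hunk above is part of #53329, which replaces `ptr::offset(n as isize)` with `ptr::add(n)`. The two are equivalent for non-negative offsets; `add` takes a `usize` and avoids the cast. A minimal sketch of that equivalence:

```rust
fn main() {
    let v = [10u8, 20, 30, 40];
    let p = v.as_ptr();
    let i = 2usize;
    unsafe {
        // `add(i)` is a convenience for `offset(i as isize)`.
        assert_eq!(*p.offset(i as isize), *p.add(i));
        assert_eq!(*p.add(i), 30);
    }
}
```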
@@ -126,7 +126,9 @@ impl<T: ?Sized> Box<T> {
        Box(Unique::new_unchecked(raw))
    }

-    /// Consumes the `Box`, returning the wrapped raw pointer.
+    /// Consumes the `Box`, returning a wrapped raw pointer.
+    ///
+    /// The pointer will be properly aligned and non-null.
    ///
    /// After calling this function, the caller is responsible for the
    /// memory previously managed by the `Box`. In particular, the
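The guarantee being documented here (`Box::into_raw` returns a properly aligned, non-null pointer, per #53462) can be checked with a short round trip:

```rust
fn main() {
    let b = Box::new(5i32);
    // Documented to be non-null and properly aligned.
    let p: *mut i32 = Box::into_raw(b);
    assert!(!p.is_null());
    // Reclaim ownership so the allocation is freed normally.
    let b = unsafe { Box::from_raw(p) };
    assert_eq!(*b, 5);
}
```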
@@ -704,7 +706,7 @@ impl<T: Clone> Clone for Box<[T]> {
impl<T> Drop for BoxBuilder<T> {
    fn drop(&mut self) {
        let mut data = self.data.ptr();
-        let max = unsafe { data.offset(self.len as isize) };
+        let max = unsafe { data.add(self.len) };

        while data != max {
            unsafe {
@@ -1151,12 +1151,12 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Leaf>, marker::KV>
let new_len = self.node.len() - self.idx - 1;

ptr::copy_nonoverlapping(
-    self.node.keys().as_ptr().offset(self.idx as isize + 1),
+    self.node.keys().as_ptr().add(self.idx + 1),
    new_node.keys.as_mut_ptr(),
    new_len
);
ptr::copy_nonoverlapping(
-    self.node.vals().as_ptr().offset(self.idx as isize + 1),
+    self.node.vals().as_ptr().add(self.idx + 1),
    new_node.vals.as_mut_ptr(),
    new_len
);

@@ -1209,17 +1209,17 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::
let new_len = self.node.len() - self.idx - 1;

ptr::copy_nonoverlapping(
-    self.node.keys().as_ptr().offset(self.idx as isize + 1),
+    self.node.keys().as_ptr().add(self.idx + 1),
    new_node.data.keys.as_mut_ptr(),
    new_len
);
ptr::copy_nonoverlapping(
-    self.node.vals().as_ptr().offset(self.idx as isize + 1),
+    self.node.vals().as_ptr().add(self.idx + 1),
    new_node.data.vals.as_mut_ptr(),
    new_len
);
ptr::copy_nonoverlapping(
-    self.node.as_internal().edges.as_ptr().offset(self.idx as isize + 1),
+    self.node.as_internal().edges.as_ptr().add(self.idx + 1),
    new_node.edges.as_mut_ptr(),
    new_len + 1
);
@@ -1283,14 +1283,14 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::
    slice_remove(self.node.keys_mut(), self.idx));
ptr::copy_nonoverlapping(
    right_node.keys().as_ptr(),
-    left_node.keys_mut().as_mut_ptr().offset(left_len as isize + 1),
+    left_node.keys_mut().as_mut_ptr().add(left_len + 1),
    right_len
);
ptr::write(left_node.vals_mut().get_unchecked_mut(left_len),
    slice_remove(self.node.vals_mut(), self.idx));
ptr::copy_nonoverlapping(
    right_node.vals().as_ptr(),
-    left_node.vals_mut().as_mut_ptr().offset(left_len as isize + 1),
+    left_node.vals_mut().as_mut_ptr().add(left_len + 1),
    right_len
);

@@ -1309,7 +1309,7 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::
    .as_internal_mut()
    .edges
    .as_mut_ptr()
-    .offset(left_len as isize + 1),
+    .add(left_len + 1),
    right_len + 1
);
@@ -1394,10 +1394,10 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::

// Make room for stolen elements in the right child.
ptr::copy(right_kv.0,
-    right_kv.0.offset(count as isize),
+    right_kv.0.add(count),
    right_len);
ptr::copy(right_kv.1,
-    right_kv.1.offset(count as isize),
+    right_kv.1.add(count),
    right_len);

// Move elements from the left child to the right one.

@@ -1418,7 +1418,7 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::
// Make room for stolen edges.
let right_edges = right.reborrow_mut().as_internal_mut().edges.as_mut_ptr();
ptr::copy(right_edges,
-    right_edges.offset(count as isize),
+    right_edges.add(count),
    right_len + 1);
right.correct_childrens_parent_links(count, count + right_len + 1);

@@ -1463,10 +1463,10 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::
move_kv(right_kv, count - 1, parent_kv, 0, 1);

// Fix right indexing
-ptr::copy(right_kv.0.offset(count as isize),
+ptr::copy(right_kv.0.add(count),
    right_kv.0,
    new_right_len);
-ptr::copy(right_kv.1.offset(count as isize),
+ptr::copy(right_kv.1.add(count),
    right_kv.1,
    new_right_len);
}

@@ -1480,7 +1480,7 @@ impl<'a, K, V> Handle<NodeRef<marker::Mut<'a>, K, V, marker::Internal>, marker::

// Fix right indexing.
let right_edges = right.reborrow_mut().as_internal_mut().edges.as_mut_ptr();
-ptr::copy(right_edges.offset(count as isize),
+ptr::copy(right_edges.add(count),
    right_edges,
    new_right_len + 1);
right.correct_childrens_parent_links(0, new_right_len + 1);
@@ -1497,11 +1497,11 @@ unsafe fn move_kv<K, V>(
    dest: (*mut K, *mut V), dest_offset: usize,
    count: usize)
{
-    ptr::copy_nonoverlapping(source.0.offset(source_offset as isize),
-        dest.0.offset(dest_offset as isize),
+    ptr::copy_nonoverlapping(source.0.add(source_offset),
+        dest.0.add(dest_offset),
        count);
-    ptr::copy_nonoverlapping(source.1.offset(source_offset as isize),
-        dest.1.offset(dest_offset as isize),
+    ptr::copy_nonoverlapping(source.1.add(source_offset),
+        dest.1.add(dest_offset),
        count);
}

@@ -1513,8 +1513,8 @@ unsafe fn move_edges<K, V>(
{
    let source_ptr = source.as_internal_mut().edges.as_mut_ptr();
    let dest_ptr = dest.as_internal_mut().edges.as_mut_ptr();
-    ptr::copy_nonoverlapping(source_ptr.offset(source_offset as isize),
-        dest_ptr.offset(dest_offset as isize),
+    ptr::copy_nonoverlapping(source_ptr.add(source_offset),
+        dest_ptr.add(dest_offset),
        count);
    dest.correct_childrens_parent_links(dest_offset, dest_offset + count);
}
@@ -1604,8 +1604,8 @@ pub mod marker {

unsafe fn slice_insert<T>(slice: &mut [T], idx: usize, val: T) {
    ptr::copy(
-        slice.as_ptr().offset(idx as isize),
-        slice.as_mut_ptr().offset(idx as isize + 1),
+        slice.as_ptr().add(idx),
+        slice.as_mut_ptr().add(idx + 1),
        slice.len() - idx
    );
    ptr::write(slice.get_unchecked_mut(idx), val);

@@ -1614,8 +1614,8 @@ unsafe fn slice_insert<T>(slice: &mut [T], idx: usize, val: T) {
unsafe fn slice_remove<T>(slice: &mut [T], idx: usize) -> T {
    let ret = ptr::read(slice.get_unchecked(idx));
    ptr::copy(
-        slice.as_ptr().offset(idx as isize + 1),
-        slice.as_mut_ptr().offset(idx as isize),
+        slice.as_ptr().add(idx + 1),
+        slice.as_mut_ptr().add(idx),
        slice.len() - idx - 1
    );
    ret
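`slice_insert` shifts the tail of the slice right by one and writes the new value into the gap. A standalone sketch of the same `ptr::copy` pattern on a fixed buffer (hypothetical, not the library code):

```rust
fn main() {
    // The last slot plays the role of spare capacity.
    let mut buf = [1u32, 2, 4, 5, 0];
    let idx = 2;
    let len = 4; // logical length before the insert
    unsafe {
        let p = buf.as_mut_ptr();
        // Shift elements at idx.. one slot to the right.
        std::ptr::copy(p.add(idx), p.add(idx + 1), len - idx);
        // Write the new value into the gap.
        std::ptr::write(p.add(idx), 3);
    }
    assert_eq!(buf, [1, 2, 3, 4, 5]);
}
```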
@@ -126,13 +126,13 @@ impl<T> VecDeque<T> {
    /// Moves an element out of the buffer
    #[inline]
    unsafe fn buffer_read(&mut self, off: usize) -> T {
-        ptr::read(self.ptr().offset(off as isize))
+        ptr::read(self.ptr().add(off))
    }

    /// Writes an element into the buffer, moving it.
    #[inline]
    unsafe fn buffer_write(&mut self, off: usize, value: T) {
-        ptr::write(self.ptr().offset(off as isize), value);
+        ptr::write(self.ptr().add(off), value);
    }

    /// Returns `true` if and only if the buffer is at full capacity.
@@ -177,8 +177,8 @@ impl<T> VecDeque<T> {
    src,
    len,
    self.cap());
-ptr::copy(self.ptr().offset(src as isize),
-    self.ptr().offset(dst as isize),
+ptr::copy(self.ptr().add(src),
+    self.ptr().add(dst),
    len);
}

@@ -197,8 +197,8 @@ impl<T> VecDeque<T> {
    src,
    len,
    self.cap());
-ptr::copy_nonoverlapping(self.ptr().offset(src as isize),
-    self.ptr().offset(dst as isize),
+ptr::copy_nonoverlapping(self.ptr().add(src),
+    self.ptr().add(dst),
    len);
}
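The two deque helpers above differ only in their overlap contract: `ptr::copy` behaves like `memmove` and tolerates overlapping ranges, while `ptr::copy_nonoverlapping` behaves like `memcpy` and does not. A small sketch of an overlapping shift that therefore must use `copy`:

```rust
fn main() {
    let mut buf = [1u8, 2, 3, 4, 5];
    unsafe {
        let p = buf.as_mut_ptr();
        // Source [0..4) and destination [1..5) overlap: `copy` handles
        // this correctly; `copy_nonoverlapping` here would be UB.
        std::ptr::copy(p, p.add(1), 4);
    }
    assert_eq!(buf, [1, 1, 2, 3, 4]);
}
```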
@@ -436,7 +436,7 @@ impl<T> VecDeque<T> {
pub fn get(&self, index: usize) -> Option<&T> {
    if index < self.len() {
        let idx = self.wrap_add(self.tail, index);
-        unsafe { Some(&*self.ptr().offset(idx as isize)) }
+        unsafe { Some(&*self.ptr().add(idx)) }
    } else {
        None
    }

@@ -465,7 +465,7 @@ impl<T> VecDeque<T> {
pub fn get_mut(&mut self, index: usize) -> Option<&mut T> {
    if index < self.len() {
        let idx = self.wrap_add(self.tail, index);
-        unsafe { Some(&mut *self.ptr().offset(idx as isize)) }
+        unsafe { Some(&mut *self.ptr().add(idx)) }
    } else {
        None
    }

@@ -501,8 +501,8 @@ impl<T> VecDeque<T> {
    let ri = self.wrap_add(self.tail, i);
    let rj = self.wrap_add(self.tail, j);
    unsafe {
-        ptr::swap(self.ptr().offset(ri as isize),
-            self.ptr().offset(rj as isize))
+        ptr::swap(self.ptr().add(ri),
+            self.ptr().add(rj))
    }
}
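`wrap_add` turns a logical offset from `tail` into a physical ring-buffer index; because a `VecDeque`'s capacity is kept at a power of two, the wrap is a bit mask. A hypothetical standalone sketch (the real method lives on `VecDeque` itself):

```rust
// Standalone sketch of the ring-index arithmetic; `cap` must be a
// power of two, as it is for VecDeque's internal buffer.
fn wrap_add(tail: usize, index: usize, cap: usize) -> usize {
    debug_assert!(cap.is_power_of_two());
    (tail + index) & (cap - 1)
}

fn main() {
    assert_eq!(wrap_add(6, 3, 8), 1); // wraps past the end of the buffer
    assert_eq!(wrap_add(0, 3, 8), 3); // no wrap needed
}
```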
@@ -1805,20 +1805,20 @@ impl<T> VecDeque<T> {
    // `at` lies in the first half.
    let amount_in_first = first_len - at;

-    ptr::copy_nonoverlapping(first_half.as_ptr().offset(at as isize),
+    ptr::copy_nonoverlapping(first_half.as_ptr().add(at),
        other.ptr(),
        amount_in_first);

    // just take all of the second half.
    ptr::copy_nonoverlapping(second_half.as_ptr(),
-        other.ptr().offset(amount_in_first as isize),
+        other.ptr().add(amount_in_first),
        second_len);
} else {
    // `at` lies in the second half, need to factor in the elements we skipped
    // in the first half.
    let offset = at - first_len;
    let amount_in_second = second_len - offset;
-    ptr::copy_nonoverlapping(second_half.as_ptr().offset(offset as isize),
+    ptr::copy_nonoverlapping(second_half.as_ptr().add(offset),
        other.ptr(),
        amount_in_second);
}
@@ -2709,24 +2709,24 @@ impl<T> From<VecDeque<T>> for Vec<T> {

// Need to move the ring to the front of the buffer, as vec will expect this.
if other.is_contiguous() {
-    ptr::copy(buf.offset(tail as isize), buf, len);
+    ptr::copy(buf.add(tail), buf, len);
} else {
    if (tail - head) >= cmp::min(cap - tail, head) {
        // There is enough free space in the centre for the shortest block so we can
        // do this in at most three copy moves.
        if (cap - tail) > head {
            // right hand block is the long one; move that enough for the left
-            ptr::copy(buf.offset(tail as isize),
-                buf.offset((tail - head) as isize),
+            ptr::copy(buf.add(tail),
+                buf.add(tail - head),
                cap - tail);
            // copy left in the end
-            ptr::copy(buf, buf.offset((cap - head) as isize), head);
+            ptr::copy(buf, buf.add(cap - head), head);
            // shift the new thing to the start
-            ptr::copy(buf.offset((tail - head) as isize), buf, len);
+            ptr::copy(buf.add(tail - head), buf, len);
        } else {
            // left hand block is the long one, we can do it in two!
-            ptr::copy(buf, buf.offset((cap - tail) as isize), head);
-            ptr::copy(buf.offset(tail as isize), buf, cap - tail);
+            ptr::copy(buf, buf.add(cap - tail), head);
+            ptr::copy(buf.add(tail), buf, cap - tail);
        }
    } else {
        // Need to use N swaps to move the ring

@@ -2751,7 +2751,7 @@ impl<T> From<VecDeque<T>> for Vec<T> {
for i in left_edge..right_edge {
    right_offset = (i - left_edge) % (cap - right_edge);
    let src: isize = (right_edge + right_offset) as isize;
-    ptr::swap(buf.offset(i as isize), buf.offset(src));
+    ptr::swap(buf.add(i), buf.offset(src));
}
let n_ops = right_edge - left_edge;
left_edge += n_ops;
@@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.

-#![unstable(feature = "raw_vec_internals", reason = "implemention detail", issue = "0")]
+#![unstable(feature = "raw_vec_internals", reason = "implementation detail", issue = "0")]
#![doc(hidden)]

use core::cmp;
@@ -282,7 +282,7 @@ impl<T, A: Alloc> RawVec<T, A> {
/// // double would have aborted or panicked if the len exceeded
/// // `isize::MAX` so this is safe to do unchecked now.
/// unsafe {
-///     ptr::write(self.buf.ptr().offset(self.len as isize), elem);
+///     ptr::write(self.buf.ptr().add(self.len), elem);
/// }
/// self.len += 1;
/// }

@@ -487,7 +487,7 @@ impl<T, A: Alloc> RawVec<T, A> {
/// // `isize::MAX` so this is safe to do unchecked now.
/// for x in elems {
///     unsafe {
-///         ptr::write(self.buf.ptr().offset(self.len as isize), x.clone());
+///         ptr::write(self.buf.ptr().add(self.len), x.clone());
///     }
///     self.len += 1;
/// }
@@ -771,7 +771,7 @@ impl<T: Clone> RcFromSlice<T> for Rc<[T]> {
    };

    for (i, item) in v.iter().enumerate() {
-        ptr::write(elems.offset(i as isize), item.clone());
+        ptr::write(elems.add(i), item.clone());
        guard.n_elems += 1;
    }
@@ -715,8 +715,8 @@ unsafe fn merge<T, F>(v: &mut [T], mid: usize, buf: *mut T, is_less: &mut F)
{
    let len = v.len();
    let v = v.as_mut_ptr();
-    let v_mid = v.offset(mid as isize);
-    let v_end = v.offset(len as isize);
+    let v_mid = v.add(mid);
+    let v_end = v.add(len);

    // The merge process first copies the shorter run into `buf`. Then it traces the newly copied
    // run and the longer run forwards (or backwards), comparing their next unconsumed elements and

@@ -742,7 +742,7 @@ unsafe fn merge<T, F>(v: &mut [T], mid: usize, buf: *mut T, is_less: &mut F)
    ptr::copy_nonoverlapping(v, buf, mid);
    hole = MergeHole {
        start: buf,
-        end: buf.offset(mid as isize),
+        end: buf.add(mid),
        dest: v,
    };

@@ -766,7 +766,7 @@ unsafe fn merge<T, F>(v: &mut [T], mid: usize, buf: *mut T, is_less: &mut F)
    ptr::copy_nonoverlapping(v_mid, buf, len - mid);
    hole = MergeHole {
        start: buf,
-        end: buf.offset((len - mid) as isize),
+        end: buf.add(len - mid),
        dest: v_mid,
    };
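For orientation, the merge step being modified copies the shorter run into `buf` and then merges it with the other run back into `v`. A safe, simplified sketch of the forward case (not the hole-tracking version the library uses):

```rust
// Merge two sorted runs v[..mid] and v[mid..] in place, using an
// auxiliary copy of the left run (simplified sketch).
fn merge(v: &mut [i32], mid: usize) {
    let buf: Vec<i32> = v[..mid].to_vec();
    let (mut i, mut j, mut k) = (0, mid, 0);
    while i < buf.len() && j < v.len() {
        if buf[i] <= v[j] {
            v[k] = buf[i];
            i += 1;
        } else {
            v[k] = v[j];
            j += 1;
        }
        k += 1;
    }
    // Drain whatever remains of the buffered left run.
    while i < buf.len() {
        v[k] = buf[i];
        i += 1;
        k += 1;
    }
}

fn main() {
    let mut v = [1, 4, 7, 2, 3, 9];
    merge(&mut v, 3);
    assert_eq!(v, [1, 2, 3, 4, 7, 9]);
}
```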
@@ -1190,8 +1190,8 @@ impl String {
    let next = idx + ch.len_utf8();
    let len = self.len();
    unsafe {
-        ptr::copy(self.vec.as_ptr().offset(next as isize),
-            self.vec.as_mut_ptr().offset(idx as isize),
+        ptr::copy(self.vec.as_ptr().add(next),
+            self.vec.as_mut_ptr().add(idx),
            len - next);
        self.vec.set_len(len - (next - idx));
    }

@@ -1232,8 +1232,8 @@ impl String {
    del_bytes += ch_len;
} else if del_bytes > 0 {
    unsafe {
-        ptr::copy(self.vec.as_ptr().offset(idx as isize),
-            self.vec.as_mut_ptr().offset((idx - del_bytes) as isize),
+        ptr::copy(self.vec.as_ptr().add(idx),
+            self.vec.as_mut_ptr().add(idx - del_bytes),
            ch_len);
    }
}

@@ -1289,11 +1289,11 @@ impl String {
    let amt = bytes.len();
    self.vec.reserve(amt);

-    ptr::copy(self.vec.as_ptr().offset(idx as isize),
-        self.vec.as_mut_ptr().offset((idx + amt) as isize),
+    ptr::copy(self.vec.as_ptr().add(idx),
+        self.vec.as_mut_ptr().add(idx + amt),
        len - idx);
    ptr::copy(bytes.as_ptr(),
-        self.vec.as_mut_ptr().offset(idx as isize),
+        self.vec.as_mut_ptr().add(idx),
        amt);
    self.vec.set_len(len + amt);
}
@@ -672,7 +672,7 @@ impl<T: Clone> ArcFromSlice<T> for Arc<[T]> {
    };

    for (i, item) in v.iter().enumerate() {
-        ptr::write(elems.offset(i as isize), item.clone());
+        ptr::write(elems.add(i), item.clone());
        guard.n_elems += 1;
    }
@@ -692,7 +692,7 @@ impl<T> Vec<T> {
pub fn truncate(&mut self, len: usize) {
    let current_len = self.len;
    unsafe {
-        let mut ptr = self.as_mut_ptr().offset(self.len as isize);
+        let mut ptr = self.as_mut_ptr().add(self.len);
        // Set the final length at the end, keeping in mind that
        // dropping an element might panic. Works around a missed
        // optimization, as seen in the following issue:

@@ -856,7 +856,7 @@ impl<T> Vec<T> {
// infallible
// The spot to put the new value
{
-    let p = self.as_mut_ptr().offset(index as isize);
+    let p = self.as_mut_ptr().add(index);
    // Shift everything over to make space. (Duplicating the
    // `index`th element into two consecutive places.)
    ptr::copy(p, p.offset(1), len - index);

@@ -891,7 +891,7 @@ impl<T> Vec<T> {
let ret;
{
    // the place we are taking from.
-    let ptr = self.as_mut_ptr().offset(index as isize);
+    let ptr = self.as_mut_ptr().add(index);
    // copy it out, unsafely having a copy of the value on
    // the stack and in the vector at the same time.
    ret = ptr::read(ptr);
@@ -1034,8 +1034,8 @@ impl<T> Vec<T> {
let mut w: usize = 1;

while r < ln {
-    let p_r = p.offset(r as isize);
-    let p_wm1 = p.offset((w - 1) as isize);
+    let p_r = p.add(r);
+    let p_wm1 = p.add(w - 1);
    if !same_bucket(&mut *p_r, &mut *p_wm1) {
        if r != w {
            let p_w = p_wm1.offset(1);

@@ -1072,7 +1072,7 @@ impl<T> Vec<T> {
    self.reserve(1);
}
unsafe {
-    let end = self.as_mut_ptr().offset(self.len as isize);
+    let end = self.as_mut_ptr().add(self.len);
    ptr::write(end, value);
    self.len += 1;
}
@@ -1196,7 +1196,7 @@ impl<T> Vec<T> {
self.set_len(start);
// Use the borrow in the IterMut to indicate borrowing behavior of the
// whole Drain iterator (like &mut T).
-let range_slice = slice::from_raw_parts_mut(self.as_mut_ptr().offset(start as isize),
+let range_slice = slice::from_raw_parts_mut(self.as_mut_ptr().add(start),
    end - start);
Drain {
    tail_start: end,

@@ -1290,7 +1290,7 @@ impl<T> Vec<T> {
self.set_len(at);
other.set_len(other_len);

-ptr::copy_nonoverlapping(self.as_ptr().offset(at as isize),
+ptr::copy_nonoverlapping(self.as_ptr().add(at),
    other.as_mut_ptr(),
    other.len());
}
@@ -1473,7 +1473,7 @@ impl<T> Vec<T> {
self.reserve(n);

unsafe {
-    let mut ptr = self.as_mut_ptr().offset(self.len() as isize);
+    let mut ptr = self.as_mut_ptr().add(self.len());
    // Use SetLenOnDrop to work around bug where compiler
    // may not realize the store through `ptr` through self.set_len()
    // don't alias.

@@ -1799,7 +1799,7 @@ impl<T> IntoIterator for Vec<T> {
let end = if mem::size_of::<T>() == 0 {
    arith_offset(begin as *const i8, self.len() as isize) as *const T
} else {
-    begin.offset(self.len() as isize) as *const T
+    begin.add(self.len()) as *const T
};
let cap = self.buf.cap();
mem::forget(self);
@@ -1898,7 +1898,7 @@ impl<T, I> SpecExtend<T, I> for Vec<T>
if let Some(additional) = high {
    self.reserve(additional);
    unsafe {
-        let mut ptr = self.as_mut_ptr().offset(self.len() as isize);
+        let mut ptr = self.as_mut_ptr().add(self.len());
        let mut local_len = SetLenOnDrop::new(&mut self.len);
        for element in iterator {
            ptr::write(ptr, element);
@@ -2561,8 +2561,8 @@ impl<'a, T> Drop for Drain<'a, T> {
let start = source_vec.len();
let tail = self.tail_start;
if tail != start {
-    let src = source_vec.as_ptr().offset(tail as isize);
-    let dst = source_vec.as_mut_ptr().offset(start as isize);
+    let src = source_vec.as_ptr().add(tail);
+    let dst = source_vec.as_mut_ptr().add(start);
    ptr::copy(src, dst, self.tail_len);
}
source_vec.set_len(start + self.tail_len);

@@ -2672,7 +2672,7 @@ impl<'a, T> Drain<'a, T> {
let range_start = vec.len;
let range_end = self.tail_start;
let range_slice = slice::from_raw_parts_mut(
-    vec.as_mut_ptr().offset(range_start as isize),
+    vec.as_mut_ptr().add(range_start),
    range_end - range_start);

for place in range_slice {

@@ -2693,8 +2693,8 @@ impl<'a, T> Drain<'a, T> {
vec.buf.reserve(used_capacity, extra_capacity);

let new_tail_start = self.tail_start + extra_capacity;
-let src = vec.as_ptr().offset(self.tail_start as isize);
-let dst = vec.as_mut_ptr().offset(new_tail_start as isize);
+let src = vec.as_ptr().add(self.tail_start);
+let dst = vec.as_mut_ptr().add(new_tail_start);
ptr::copy(src, dst, self.tail_len);
self.tail_start = new_tail_start;
}
@@ -249,7 +249,7 @@ mod platform {
}

unsafe fn align_ptr(ptr: *mut u8, align: usize) -> *mut u8 {
-    let aligned = ptr.offset((align - (ptr as usize & (align - 1))) as isize);
+    let aligned = ptr.add(align - (ptr as usize & (align - 1)));
    *get_header(aligned) = Header(ptr);
    aligned
}
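`align_ptr` rounds the pointer up to the next multiple of `align` (a power of two), always advancing by at least one byte so the header fits below the aligned address. The same arithmetic on plain integers:

```rust
// Sketch of the rounding in `align_ptr` above: mirrors
// `ptr.add(align - (ptr as usize & (align - 1)))`.
fn align_up(addr: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    addr + (align - (addr & (align - 1)))
}

fn main() {
    assert_eq!(align_up(13, 8), 16);
    // An already-aligned address still advances by a full `align`,
    // guaranteeing room for the header.
    assert_eq!(align_up(16, 8), 24);
}
```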
@@ -106,7 +106,7 @@ impl<T> TypedArenaChunk<T> {
    // A pointer as large as possible for zero-sized elements.
    !0 as *mut T
} else {
-    self.start().offset(self.storage.cap() as isize)
+    self.start().add(self.storage.cap())
}

@@ -179,7 +179,7 @@ impl<T> TypedArena<T> {
unsafe {
    let start_ptr = self.ptr.get();
    let arena_slice = slice::from_raw_parts_mut(start_ptr, slice.len());
-    self.ptr.set(start_ptr.offset(arena_slice.len() as isize));
+    self.ptr.set(start_ptr.add(arena_slice.len()));
    arena_slice.copy_from_slice(slice);
    arena_slice
}
|
@@ -27,7 +27,7 @@ use task::{Context, Poll};
 /// - The `Future` trait is currently not object safe: The `Future::poll`
 /// method makes uses the arbitrary self types feature and traits in which
 /// this feature is used are currently not object safe due to current compiler
-/// limitations. (See tracking issue for arbitray self types for more
+/// limitations. (See tracking issue for arbitrary self types for more
 /// information #44874)
 pub struct LocalFutureObj<'a, T> {
     ptr: *mut (),

@@ -102,7 +102,7 @@ impl<'a, T> Drop for LocalFutureObj<'a, T> {
 /// - The `Future` trait is currently not object safe: The `Future::poll`
 /// method makes uses the arbitrary self types feature and traits in which
 /// this feature is used are currently not object safe due to current compiler
-/// limitations. (See tracking issue for arbitray self types for more
+/// limitations. (See tracking issue for arbitrary self types for more
 /// information #44874)
 pub struct FutureObj<'a, T>(LocalFutureObj<'a, T>);
@@ -918,7 +918,7 @@ extern "rust-intrinsic" {
     /// // treat it as "dead", and therefore, you only have two real
     /// // mutable slices.
     /// (slice::from_raw_parts_mut(ptr, mid),
-    ///  slice::from_raw_parts_mut(ptr.offset(mid as isize), len - mid))
+    ///  slice::from_raw_parts_mut(ptr.add(mid), len - mid))
     /// }
     /// }
     /// ```
@@ -511,7 +511,7 @@ macro_rules! impls{
 /// let ptr = vec.as_ptr();
 /// Slice {
 ///     start: ptr,
-///     end: unsafe { ptr.offset(vec.len() as isize) },
+///     end: unsafe { ptr.add(vec.len()) },
 ///     phantom: PhantomData,
 /// }
 /// }
@@ -603,15 +603,35 @@ unsafe impl<T: ?Sized> Freeze for *mut T {}
 unsafe impl<'a, T: ?Sized> Freeze for &'a T {}
 unsafe impl<'a, T: ?Sized> Freeze for &'a mut T {}

-/// Types which can be moved out of a `PinMut`.
+/// Types which can be safely moved after being pinned.
 ///
-/// The `Unpin` trait is used to control the behavior of the [`PinMut`] type. If a
-/// type implements `Unpin`, it is safe to move a value of that type out of the
-/// `PinMut` pointer.
+/// Since Rust itself has no notion of immovable types, and will consider moves to always be safe,
+/// this trait cannot prevent types from moving by itself.
+///
+/// Instead it can be used to prevent moves through the type system,
+/// by controlling the behavior of special pointer types like [`PinMut`],
+/// which "pin" the type in place by not allowing it to be moved out of them.
+///
+/// Implementing this trait lifts the restrictions of pinning off a type,
+/// which then allows it to move out with functions such as [`replace`].
+///
+/// So this, for example, can only be done on types implementing `Unpin`:
+///
+/// ```rust
+/// #![feature(pin)]
+/// use std::mem::{PinMut, replace};
+///
+/// let mut string = "this".to_string();
+/// let mut pinned_string = PinMut::new(&mut string);
+///
+/// // dereferencing the pointer mutably is only possible because String implements Unpin
+/// replace(&mut *pinned_string, "other".to_string());
+/// ```
+///
+/// This trait is automatically implemented for almost every type.
 ///
 /// [`PinMut`]: ../mem/struct.PinMut.html
+/// [`replace`]: ../mem/fn.replace.html
 #[unstable(feature = "pin", issue = "49150")]
 pub auto trait Unpin {}
@@ -34,22 +34,32 @@ macro_rules! impl_nonzero_fmt {
     }
 }

+macro_rules! doc_comment {
+    ($x:expr, $($tt:tt)*) => {
+        #[doc = $x]
+        $($tt)*
+    };
+}
+
 macro_rules! nonzero_integers {
     ( $( $Ty: ident($Int: ty); )+ ) => {
         $(
-            /// An integer that is known not to equal zero.
-            ///
-            /// This enables some memory layout optimization.
-            /// For example, `Option<NonZeroU32>` is the same size as `u32`:
-            ///
-            /// ```rust
-            /// use std::mem::size_of;
-            /// assert_eq!(size_of::<Option<std::num::NonZeroU32>>(), size_of::<u32>());
-            /// ```
-            #[stable(feature = "nonzero", since = "1.28.0")]
-            #[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]
-            #[repr(transparent)]
-            pub struct $Ty(NonZero<$Int>);
+            doc_comment! {
+                concat!("An integer that is known not to equal zero.
+
+This enables some memory layout optimization.
+For example, `Option<", stringify!($Ty), ">` is the same size as `", stringify!($Int), "`:
+
+```rust
+use std::mem::size_of;
+assert_eq!(size_of::<Option<std::num::", stringify!($Ty), ">>(), size_of::<", stringify!($Int),
+">());
+```"),
+                #[stable(feature = "nonzero", since = "1.28.0")]
+                #[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)]
+                #[repr(transparent)]
+                pub struct $Ty(NonZero<$Int>);
+            }

             impl $Ty {
                 /// Create a non-zero without checking the value.
@@ -176,13 +186,6 @@ pub mod dec2flt;
 pub mod bignum;
 pub mod diy_float;

-macro_rules! doc_comment {
-    ($x:expr, $($tt:tt)*) => {
-        #[doc = $x]
-        $($tt)*
-    };
-}
-
 mod wrapping;

 // `Int` + `SignedInt` implemented for signed integers
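The hunks above (from #53363) generate per-type docs claiming that `Option<NonZeroU32>` is the same size as `u32`. That niche-layout claim can be verified directly (a standalone check, not code from the patch):

```rust
use std::mem::size_of;

fn main() {
    // The zero bit pattern is invalid for NonZeroU32, so `Option`
    // can use it to encode `None` with no extra discriminant.
    assert_eq!(size_of::<Option<std::num::NonZeroU32>>(), size_of::<u32>());
    assert_eq!(size_of::<Option<std::num::NonZeroU8>>(), 1);
    // Without a niche, Option<u32> must store a discriminant.
    assert!(size_of::<Option<u32>>() > size_of::<u32>());
}
```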
@@ -66,6 +66,11 @@
 #[lang = "fn"]
 #[stable(feature = "rust1", since = "1.0.0")]
 #[rustc_paren_sugar]
+#[rustc_on_unimplemented(
+    on(Args="()", note="wrap the `{Self}` in a closure with no arguments: `|| {{ /* code */ }}"),
+    message="expected a `{Fn}<{Args}>` closure, found `{Self}`",
+    label="expected an `Fn<{Args}>` closure, found `{Self}`",
+)]
 #[fundamental] // so that regex can rely that `&str: !FnMut`
 pub trait Fn<Args> : FnMut<Args> {
     /// Performs the call operation.

@@ -139,6 +144,11 @@ pub trait Fn<Args> : FnMut<Args> {
 #[lang = "fn_mut"]
 #[stable(feature = "rust1", since = "1.0.0")]
 #[rustc_paren_sugar]
+#[rustc_on_unimplemented(
+    on(Args="()", note="wrap the `{Self}` in a closure with no arguments: `|| {{ /* code */ }}"),
+    message="expected a `{FnMut}<{Args}>` closure, found `{Self}`",
+    label="expected an `FnMut<{Args}>` closure, found `{Self}`",
+)]
 #[fundamental] // so that regex can rely that `&str: !FnMut`
 pub trait FnMut<Args> : FnOnce<Args> {
     /// Performs the call operation.

@@ -212,6 +222,11 @@ pub trait FnMut<Args> : FnOnce<Args> {
 #[lang = "fn_once"]
 #[stable(feature = "rust1", since = "1.0.0")]
 #[rustc_paren_sugar]
+#[rustc_on_unimplemented(
+    on(Args="()", note="wrap the `{Self}` in a closure with no arguments: `|| {{ /* code */ }}"),
+    message="expected a `{FnOnce}<{Args}>` closure, found `{Self}`",
+    label="expected an `FnOnce<{Args}>` closure, found `{Self}`",
+)]
 #[fundamental] // so that regex can rely that `&str: !FnMut`
 pub trait FnOnce<Args> {
     /// The returned type after the call operator is used.
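The new `on(Args="()")` notes above (from #53296) tell users who pass a plain value where a zero-argument closure is expected to wrap it in `|| { ... }`. A sketch of the fix the diagnostic suggests (`call_twice` is a hypothetical helper, not from the patch):

```rust
// Hypothetical helper that requires a zero-argument closure, not a value.
fn call_twice<F: Fn() -> i32>(f: F) -> i32 {
    f() + f()
}

fn main() {
    let x = 21;
    // Passing `x` directly would fail with the new suggestion:
    // "wrap the `i32` in a closure with no arguments: `|| { /* code */ }`".
    // Wrapping it in a capture-only closure satisfies `Fn() -> i32`:
    let result = call_twice(|| x);
    assert_eq!(result, 42);
}
```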
@@ -226,8 +226,8 @@ unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, len: usize) {
         // Declaring `t` here avoids aligning the stack when this loop is unused
         let mut t: Block = mem::uninitialized();
         let t = &mut t as *mut _ as *mut u8;
-        let x = x.offset(i as isize);
-        let y = y.offset(i as isize);
+        let x = x.add(i);
+        let y = y.add(i);

         // Swap a block of bytes of x & y, using t as a temporary buffer
         // This should be optimized into efficient SIMD operations where available

@@ -243,8 +243,8 @@ unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, len: usize) {
         let rem = len - i;

         let t = &mut t as *mut _ as *mut u8;
-        let x = x.offset(i as isize);
-        let y = y.offset(i as isize);
+        let x = x.add(i);
+        let y = y.add(i);

         copy_nonoverlapping(x, t, rem);
         copy_nonoverlapping(y, x, rem);
@@ -613,7 +613,7 @@ impl<T: ?Sized> *const T {
     /// The compiler and standard library generally tries to ensure allocations
     /// never reach a size where an offset is a concern. For instance, `Vec`
     /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
-    /// `vec.as_ptr().offset(vec.len() as isize)` is always safe.
+    /// `vec.as_ptr().add(vec.len())` is always safe.
     ///
     /// Most platforms fundamentally can't even construct such an allocation.
     /// For instance, no known 64-bit platform can ever serve a request

@@ -1231,7 +1231,7 @@ impl<T: ?Sized> *const T {
     /// let ptr = &x[n] as *const u8;
     /// let offset = ptr.align_offset(align_of::<u16>());
     /// if offset < x.len() - n - 1 {
-    ///     let u16_ptr = ptr.offset(offset as isize) as *const u16;
+    ///     let u16_ptr = ptr.add(offset) as *const u16;
     ///     assert_ne!(*u16_ptr, 500);
     /// } else {
     ///     // while the pointer can be aligned via `offset`, it would point

@@ -1334,7 +1334,7 @@ impl<T: ?Sized> *mut T {
     /// The compiler and standard library generally tries to ensure allocations
     /// never reach a size where an offset is a concern. For instance, `Vec`
     /// and `Box` ensure they never allocate more than `isize::MAX` bytes, so
-    /// `vec.as_ptr().offset(vec.len() as isize)` is always safe.
+    /// `vec.as_ptr().add(vec.len())` is always safe.
     ///
     /// Most platforms fundamentally can't even construct such an allocation.
     /// For instance, no known 64-bit platform can ever serve a request

@@ -2261,7 +2261,7 @@ impl<T: ?Sized> *mut T {
     /// let ptr = &x[n] as *const u8;
     /// let offset = ptr.align_offset(align_of::<u16>());
     /// if offset < x.len() - n - 1 {
-    ///     let u16_ptr = ptr.offset(offset as isize) as *const u16;
+    ///     let u16_ptr = ptr.add(offset) as *const u16;
     ///     assert_ne!(*u16_ptr, 500);
     /// } else {
     ///     // while the pointer can be aligned via `offset`, it would point

@@ -2291,7 +2291,7 @@ impl<T: ?Sized> *mut T {
 ///
 /// If we ever decide to make it possible to call the intrinsic with `a` that is not a
 /// power-of-two, it will probably be more prudent to just change to a naive implementation rather
-/// than trying to adapt this to accomodate that change.
+/// than trying to adapt this to accommodate that change.
 ///
 /// Any questions go to @nagisa.
 #[lang="align_offset"]
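The doc hunks above use `align_offset` to find how far a byte pointer must advance before it is suitably aligned. The method was unstable at the time of this commit but is available on current toolchains; a minimal standalone check of the invariant (not code from the patch):

```rust
use std::mem::align_of;

fn main() {
    let x = [5u8, 6, 7, 8, 9];
    let ptr = &x[1] as *const u8;
    // Number of u8 steps until `ptr` is aligned for u16
    // (usize::MAX if alignment can never be reached).
    let offset = ptr.align_offset(align_of::<u16>());
    assert!(offset < align_of::<u16>());
    // Advancing by `offset` yields a u16-aligned address.
    assert_eq!((ptr as usize + offset) % align_of::<u16>(), 0);
}
```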
@@ -72,8 +72,8 @@ pub fn memchr(x: u8, text: &[u8]) -> Option<usize> {
     if len >= 2 * usize_bytes {
         while offset <= len - 2 * usize_bytes {
             unsafe {
-                let u = *(ptr.offset(offset as isize) as *const usize);
-                let v = *(ptr.offset((offset + usize_bytes) as isize) as *const usize);
+                let u = *(ptr.add(offset) as *const usize);
+                let v = *(ptr.add(offset + usize_bytes) as *const usize);

                 // break if there is a matching byte
                 let zu = contains_zero_byte(u ^ repeated_x);
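The `memchr` hunk above XORs each word with the repeated needle byte and feeds it to `contains_zero_byte`, turning "find byte b" into "find a zero byte". The underlying word-at-a-time trick, shown for `u64` (a reconstruction of the technique, not the exact libcore source):

```rust
// A byte of `x` is zero iff subtracting 1 from it borrows (sets its
// high bit in `x - LO`) while the byte's own high bit was clear.
fn contains_zero_byte(x: u64) -> bool {
    const LO: u64 = 0x0101_0101_0101_0101;
    const HI: u64 = 0x8080_8080_8080_8080;
    x.wrapping_sub(LO) & !x & HI != 0
}

fn main() {
    assert!(contains_zero_byte(0x1122_0044_5566_7788));
    assert!(!contains_zero_byte(0x1122_3344_5566_7788));
    // XOR with a repeated needle byte zeroes exactly the matching bytes.
    let repeated_needle = 0x33u64 * 0x0101_0101_0101_0101;
    assert!(contains_zero_byte(0x1122_3344_5566_7788 ^ repeated_needle));
}
```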
@@ -383,7 +383,7 @@ impl<T> [T] {
     ///
     /// unsafe {
     ///     for i in 0..x.len() {
-    ///         assert_eq!(x.get_unchecked(i), &*x_ptr.offset(i as isize));
+    ///         assert_eq!(x.get_unchecked(i), &*x_ptr.add(i));
     ///     }
     /// }
     /// ```

@@ -410,7 +410,7 @@ impl<T> [T] {
     ///
     /// unsafe {
     ///     for i in 0..x.len() {
-    ///         *x_ptr.offset(i as isize) += 2;
+    ///         *x_ptr.add(i) += 2;
     ///     }
     /// }
     /// assert_eq!(x, &[3, 4, 6]);

@@ -546,9 +546,9 @@ impl<T> [T] {
             assume(!ptr.is_null());

             let end = if mem::size_of::<T>() == 0 {
-                (ptr as *const u8).wrapping_offset(self.len() as isize) as *const T
+                (ptr as *const u8).wrapping_add(self.len()) as *const T
             } else {
-                ptr.offset(self.len() as isize)
+                ptr.add(self.len())
             };

             Iter {

@@ -578,9 +578,9 @@ impl<T> [T] {
             assume(!ptr.is_null());

             let end = if mem::size_of::<T>() == 0 {
-                (ptr as *mut u8).wrapping_offset(self.len() as isize) as *mut T
+                (ptr as *mut u8).wrapping_add(self.len()) as *mut T
             } else {
-                ptr.offset(self.len() as isize)
+                ptr.add(self.len())
             };

             IterMut {

@@ -842,7 +842,7 @@ impl<T> [T] {
         assert!(mid <= len);

         (from_raw_parts_mut(ptr, mid),
-         from_raw_parts_mut(ptr.offset(mid as isize), len - mid))
+         from_raw_parts_mut(ptr.add(mid), len - mid))
     }
 }
@@ -1444,7 +1444,7 @@ impl<T> [T] {

         unsafe {
             let p = self.as_mut_ptr();
-            rotate::ptr_rotate(mid, p.offset(mid as isize), k);
+            rotate::ptr_rotate(mid, p.add(mid), k);
         }
     }

@@ -1485,7 +1485,7 @@ impl<T> [T] {

         unsafe {
             let p = self.as_mut_ptr();
-            rotate::ptr_rotate(mid, p.offset(mid as isize), k);
+            rotate::ptr_rotate(mid, p.add(mid), k);
         }
     }
@@ -1680,7 +1680,7 @@ impl<T> [T] {
         }
     }

-    /// Function to calculate lenghts of the middle and trailing slice for `align_to{,_mut}`.
+    /// Function to calculate lengths of the middle and trailing slice for `align_to{,_mut}`.
     fn align_to_offsets<U>(&self) -> (usize, usize) {
         // What we gonna do about `rest` is figure out what multiple of `U`s we can put in a
         // lowest number of `T`s. And how many `T`s we need for each such "multiple".

@@ -1740,7 +1740,7 @@ impl<T> [T] {
         (us_len, ts_len)
     }

-    /// Transmute the slice to a slice of another type, ensuring aligment of the types is
+    /// Transmute the slice to a slice of another type, ensuring alignment of the types is
     /// maintained.
     ///
     /// This method splits the slice into three distinct slices: prefix, correctly aligned middle

@@ -1789,11 +1789,11 @@ impl<T> [T] {
             let (us_len, ts_len) = rest.align_to_offsets::<U>();
             (left,
              from_raw_parts(rest.as_ptr() as *const U, us_len),
-             from_raw_parts(rest.as_ptr().offset((rest.len() - ts_len) as isize), ts_len))
+             from_raw_parts(rest.as_ptr().add(rest.len() - ts_len), ts_len))
         }
     }

-    /// Transmute the slice to a slice of another type, ensuring aligment of the types is
+    /// Transmute the slice to a slice of another type, ensuring alignment of the types is
     /// maintained.
     ///
     /// This method splits the slice into three distinct slices: prefix, correctly aligned middle

@@ -1843,7 +1843,7 @@ impl<T> [T] {
             let mut_ptr = rest.as_mut_ptr();
             (left,
              from_raw_parts_mut(mut_ptr as *mut U, us_len),
-             from_raw_parts_mut(mut_ptr.offset((rest.len() - ts_len) as isize), ts_len))
+             from_raw_parts_mut(mut_ptr.add(rest.len() - ts_len), ts_len))
         }
     }
 }
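The `align_to{,_mut}` hunks above adjust the pointer math that produces the prefix/middle/suffix split. The method (stabilized shortly after this commit) upholds an invariant worth seeing in isolation (illustrative code, not from the patch):

```rust
use std::mem::align_of;

fn main() {
    let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
    // Sound for u8 -> u16 because every bit pattern is a valid u16.
    let (prefix, middle, suffix) = unsafe { bytes.align_to::<u16>() };
    // Every element lands in exactly one of the three slices.
    assert_eq!(prefix.len() + middle.len() * 2 + suffix.len(), bytes.len());
    // The middle slice is correctly aligned for u16.
    assert_eq!(middle.as_ptr() as usize % align_of::<u16>(), 0);
}
```

How many elements fall into the prefix depends on the array's runtime address; only the partition invariants are guaranteed.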
@@ -2037,12 +2037,12 @@ impl<T> SliceIndex<[T]> for usize {

     #[inline]
     unsafe fn get_unchecked(self, slice: &[T]) -> &T {
-        &*slice.as_ptr().offset(self as isize)
+        &*slice.as_ptr().add(self)
     }

     #[inline]
     unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut T {
-        &mut *slice.as_mut_ptr().offset(self as isize)
+        &mut *slice.as_mut_ptr().add(self)
     }

     #[inline]

@@ -2086,12 +2086,12 @@ impl<T> SliceIndex<[T]> for ops::Range<usize> {

     #[inline]
     unsafe fn get_unchecked(self, slice: &[T]) -> &[T] {
-        from_raw_parts(slice.as_ptr().offset(self.start as isize), self.end - self.start)
+        from_raw_parts(slice.as_ptr().add(self.start), self.end - self.start)
     }

     #[inline]
     unsafe fn get_unchecked_mut(self, slice: &mut [T]) -> &mut [T] {
-        from_raw_parts_mut(slice.as_mut_ptr().offset(self.start as isize), self.end - self.start)
+        from_raw_parts_mut(slice.as_mut_ptr().add(self.start), self.end - self.start)
    }

     #[inline]
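The `SliceIndex` hunks above rewrite the unchecked indexing primitives; from safe code they are reached through `get_unchecked`, which skips the bounds check and now lowers to `as_ptr().add(i)`. A small usage sketch (illustrative, not from the patch):

```rust
fn main() {
    let x = [10, 20, 30];
    // Caller guarantees the index / range is in bounds.
    unsafe {
        assert_eq!(*x.get_unchecked(1), 20);
        // The Range impl returns a subslice the same way.
        assert_eq!(x.get_unchecked(0..2), &[10, 20][..]);
    }
}
```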
@@ -2467,7 +2467,7 @@ macro_rules! iterator {
                 }
                 // We are in bounds. `offset` does the right thing even for ZSTs.
                 unsafe {
-                    let elem = Some(& $( $mut_ )* *self.ptr.offset(n as isize));
+                    let elem = Some(& $( $mut_ )* *self.ptr.add(n));
                     self.post_inc_start((n as isize).wrapping_add(1));
                     elem
                 }

@@ -3347,7 +3347,7 @@ impl<'a, T> FusedIterator for Windows<'a, T> {}
 #[doc(hidden)]
 unsafe impl<'a, T> TrustedRandomAccess for Windows<'a, T> {
     unsafe fn get_unchecked(&mut self, i: usize) -> &'a [T] {
-        from_raw_parts(self.v.as_ptr().offset(i as isize), self.size)
+        from_raw_parts(self.v.as_ptr().add(i), self.size)
     }
     fn may_have_side_effect() -> bool { false }
 }

@@ -3474,7 +3474,7 @@ unsafe impl<'a, T> TrustedRandomAccess for Chunks<'a, T> {
             None => self.v.len(),
             Some(end) => cmp::min(end, self.v.len()),
         };
-        from_raw_parts(self.v.as_ptr().offset(start as isize), end - start)
+        from_raw_parts(self.v.as_ptr().add(start), end - start)
     }
     fn may_have_side_effect() -> bool { false }
 }

@@ -3593,7 +3593,7 @@ unsafe impl<'a, T> TrustedRandomAccess for ChunksMut<'a, T> {
             None => self.v.len(),
             Some(end) => cmp::min(end, self.v.len()),
         };
-        from_raw_parts_mut(self.v.as_mut_ptr().offset(start as isize), end - start)
+        from_raw_parts_mut(self.v.as_mut_ptr().add(start), end - start)
     }
     fn may_have_side_effect() -> bool { false }
 }

@@ -3716,7 +3716,7 @@ impl<'a, T> FusedIterator for ExactChunks<'a, T> {}
 unsafe impl<'a, T> TrustedRandomAccess for ExactChunks<'a, T> {
     unsafe fn get_unchecked(&mut self, i: usize) -> &'a [T] {
         let start = i * self.chunk_size;
-        from_raw_parts(self.v.as_ptr().offset(start as isize), self.chunk_size)
+        from_raw_parts(self.v.as_ptr().add(start), self.chunk_size)
     }
     fn may_have_side_effect() -> bool { false }
 }

@@ -3831,7 +3831,7 @@ impl<'a, T> FusedIterator for ExactChunksMut<'a, T> {}
 unsafe impl<'a, T> TrustedRandomAccess for ExactChunksMut<'a, T> {
     unsafe fn get_unchecked(&mut self, i: usize) -> &'a mut [T] {
         let start = i * self.chunk_size;
-        from_raw_parts_mut(self.v.as_mut_ptr().offset(start as isize), self.chunk_size)
+        from_raw_parts_mut(self.v.as_mut_ptr().add(start), self.chunk_size)
     }
     fn may_have_side_effect() -> bool { false }
 }

@@ -4116,7 +4116,7 @@ impl_marker_for!(BytewiseEquality,
 #[doc(hidden)]
 unsafe impl<'a, T> TrustedRandomAccess for Iter<'a, T> {
     unsafe fn get_unchecked(&mut self, i: usize) -> &'a T {
-        &*self.ptr.offset(i as isize)
+        &*self.ptr.add(i)
     }
     fn may_have_side_effect() -> bool { false }
 }

@@ -4124,7 +4124,7 @@ unsafe impl<'a, T> TrustedRandomAccess for Iter<'a, T> {
 #[doc(hidden)]
 unsafe impl<'a, T> TrustedRandomAccess for IterMut<'a, T> {
     unsafe fn get_unchecked(&mut self, i: usize) -> &'a mut T {
-        &mut *self.ptr.offset(i as isize)
+        &mut *self.ptr.add(i)
     }
     fn may_have_side_effect() -> bool { false }
 }
@@ -77,8 +77,8 @@ pub unsafe fn ptr_rotate<T>(mut left: usize, mid: *mut T, mut right: usize) {
         }

         ptr::swap_nonoverlapping(
-            mid.offset(-(left as isize)),
-            mid.offset((right-delta) as isize),
+            mid.sub(left),
+            mid.add(right - delta),
             delta);

         if left <= right {

@@ -91,15 +91,15 @@ pub unsafe fn ptr_rotate<T>(mut left: usize, mid: *mut T, mut right: usize) {
     let rawarray = RawArray::new();
     let buf = rawarray.ptr();

-    let dim = mid.offset(-(left as isize)).offset(right as isize);
+    let dim = mid.sub(left).add(right);
     if left <= right {
-        ptr::copy_nonoverlapping(mid.offset(-(left as isize)), buf, left);
-        ptr::copy(mid, mid.offset(-(left as isize)), right);
+        ptr::copy_nonoverlapping(mid.sub(left), buf, left);
+        ptr::copy(mid, mid.sub(left), right);
         ptr::copy_nonoverlapping(buf, dim, left);
     }
     else {
         ptr::copy_nonoverlapping(mid, buf, right);
-        ptr::copy(mid.offset(-(left as isize)), dim, left);
-        ptr::copy_nonoverlapping(buf, mid.offset(-(left as isize)), right);
+        ptr::copy(mid.sub(left), dim, left);
+        ptr::copy_nonoverlapping(buf, mid.sub(left), right);
     }
 }
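`ptr_rotate`, touched above, is the unsafe engine behind the safe `rotate_left` / `rotate_right` slice methods. Their observable behavior, as a standalone check (illustrative, not from the patch):

```rust
fn main() {
    let mut v = [1, 2, 3, 4, 5, 6];
    // Moves the first two elements to the back; internally this
    // calls `rotate::ptr_rotate(mid, p.add(mid), k)`.
    v.rotate_left(2);
    assert_eq!(v, [3, 4, 5, 6, 1, 2]);
    // rotate_right undoes it.
    v.rotate_right(2);
    assert_eq!(v, [1, 2, 3, 4, 5, 6]);
}
```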
@@ -221,15 +221,15 @@ fn partition_in_blocks<T, F>(v: &mut [T], pivot: &T, is_less: &mut F) -> usize
     // 3. `end` - End pointer into the `offsets` array.
     // 4. `offsets - Indices of out-of-order elements within the block.

-    // The current block on the left side (from `l` to `l.offset(block_l)`).
+    // The current block on the left side (from `l` to `l.add(block_l)`).
     let mut l = v.as_mut_ptr();
     let mut block_l = BLOCK;
     let mut start_l = ptr::null_mut();
     let mut end_l = ptr::null_mut();
     let mut offsets_l: [u8; BLOCK] = unsafe { mem::uninitialized() };

-    // The current block on the right side (from `r.offset(-block_r)` to `r`).
-    let mut r = unsafe { l.offset(v.len() as isize) };
+    // The current block on the right side (from `r.sub(block_r)` to `r`).
+    let mut r = unsafe { l.add(v.len()) };
     let mut block_r = BLOCK;
     let mut start_r = ptr::null_mut();
     let mut end_r = ptr::null_mut();
@@ -1518,12 +1518,12 @@ fn run_utf8_validation(v: &[u8]) -> Result<(), Utf8Error> {
         let ptr = v.as_ptr();
         let align = unsafe {
             // the offset is safe, because `index` is guaranteed inbounds
-            ptr.offset(index as isize).align_offset(usize_bytes)
+            ptr.add(index).align_offset(usize_bytes)
         };
         if align == 0 {
             while index < blocks_end {
                 unsafe {
-                    let block = ptr.offset(index as isize) as *const usize;
+                    let block = ptr.add(index) as *const usize;
                     // break if there is a nonascii byte
                     let zu = contains_nonascii(*block);
                     let zv = contains_nonascii(*block.offset(1));

@@ -1878,13 +1878,13 @@ mod traits {
         }
         #[inline]
         unsafe fn get_unchecked(self, slice: &str) -> &Self::Output {
-            let ptr = slice.as_ptr().offset(self.start as isize);
+            let ptr = slice.as_ptr().add(self.start);
             let len = self.end - self.start;
             super::from_utf8_unchecked(slice::from_raw_parts(ptr, len))
         }
         #[inline]
         unsafe fn get_unchecked_mut(self, slice: &mut str) -> &mut Self::Output {
-            let ptr = slice.as_ptr().offset(self.start as isize);
+            let ptr = slice.as_ptr().add(self.start);
             let len = self.end - self.start;
             super::from_utf8_unchecked_mut(slice::from_raw_parts_mut(ptr as *mut u8, len))
         }

@@ -1973,13 +1973,13 @@ mod traits {
         }
         #[inline]
         unsafe fn get_unchecked(self, slice: &str) -> &Self::Output {
-            let ptr = slice.as_ptr().offset(self.start as isize);
+            let ptr = slice.as_ptr().add(self.start);
             let len = slice.len() - self.start;
             super::from_utf8_unchecked(slice::from_raw_parts(ptr, len))
         }
         #[inline]
         unsafe fn get_unchecked_mut(self, slice: &mut str) -> &mut Self::Output {
-            let ptr = slice.as_ptr().offset(self.start as isize);
+            let ptr = slice.as_ptr().add(self.start);
             let len = slice.len() - self.start;
             super::from_utf8_unchecked_mut(slice::from_raw_parts_mut(ptr as *mut u8, len))
         }

@@ -2573,7 +2573,7 @@ impl str {
         unsafe {
             (from_utf8_unchecked_mut(slice::from_raw_parts_mut(ptr, mid)),
              from_utf8_unchecked_mut(slice::from_raw_parts_mut(
-                ptr.offset(mid as isize),
+                ptr.add(mid),
                 len - mid
             )))
         }
@@ -154,7 +154,7 @@ pub struct Parser<'a> {
     style: Option<usize>,
     /// How many newlines have been seen in the string so far, to adjust the error spans
     seen_newlines: usize,
-    /// Start and end byte offset of every successfuly parsed argument
+    /// Start and end byte offset of every successfully parsed argument
     pub arg_places: Vec<(usize, usize)>,
 }
@@ -38,7 +38,7 @@ impl DwarfReader {
     // telling the backend to generate "misalignment-safe" code.
     pub unsafe fn read<T: Copy>(&mut self) -> T {
         let Unaligned(result) = *(self.ptr as *const Unaligned<T>);
-        self.ptr = self.ptr.offset(mem::size_of::<T>() as isize);
+        self.ptr = self.ptr.add(mem::size_of::<T>());
         result
     }
@@ -142,7 +142,7 @@ mod imp {

 #[repr(C)]
 pub struct _ThrowInfo {
-    pub attribues: c_uint,
+    pub attributes: c_uint,
     pub pnfnUnwind: imp::ptr_t,
     pub pForwardCompat: imp::ptr_t,
     pub pCatchableTypeArray: imp::ptr_t,

@@ -178,7 +178,7 @@ pub struct _TypeDescriptor {
 }

 static mut THROW_INFO: _ThrowInfo = _ThrowInfo {
-    attribues: 0,
+    attributes: 0,
     pnfnUnwind: ptr!(0),
     pForwardCompat: ptr!(0),
     pCatchableTypeArray: ptr!(0),
@@ -12,7 +12,7 @@
 //!
 //! This library, provided by the standard distribution, provides the types
 //! consumed in the interfaces of procedurally defined macro definitions such as
-//! function-like macros `#[proc_macro]`, macro attribures `#[proc_macro_attribute]` and
+//! function-like macros `#[proc_macro]`, macro attributes `#[proc_macro_attribute]` and
 //! custom derive attributes`#[proc_macro_derive]`.
 //!
 //! Note that this crate is intentionally bare-bones currently.
@@ -49,7 +49,7 @@ pub mod query_result;
 mod substitute;

 /// A "canonicalized" type `V` is one where all free inference
-/// variables have been rewriten to "canonical vars". These are
+/// variables have been rewritten to "canonical vars". These are
 /// numbered starting from 0 in order of first appearance.
 #[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, RustcDecodable, RustcEncodable)]
 pub struct Canonical<'gcx, V> {
@@ -561,7 +561,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
             value.push_highlighted("<");
         }

-        // Output the lifetimes fot the first type
+        // Output the lifetimes for the first type
         let lifetimes = sub.regions()
             .map(|lifetime| {
                 let s = lifetime.to_string();
@@ -527,7 +527,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
      * we're not careful, it will succeed.
      *
      * The reason is that when we walk through the subtyping
-     * algorith, we begin by replacing `'a` with a skolemized
+     * algorithm, we begin by replacing `'a` with a skolemized
      * variable `'1`. We then have `fn(_#0t) <: fn(&'1 int)`. This
      * can be made true by unifying `_#0t` with `&'1 int`. In the
      * process, we create a fresh variable for the skolemized
@@ -47,7 +47,7 @@
 #![feature(drain_filter)]
 #![feature(iterator_find_map)]
 #![cfg_attr(windows, feature(libc))]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
 #![feature(never_type)]
 #![feature(exhaustive_patterns)]
 #![feature(extern_types)]
@@ -42,11 +42,6 @@ pub use self::NativeLibraryKind::*;

 // lonely orphan structs and enums looking for a better home

-#[derive(Clone, Debug, Copy)]
-pub struct LinkMeta {
-    pub crate_hash: Svh,
-}
-
 /// Where a crate came from on the local filesystem. One of these three options
 /// must be non-None.
 #[derive(PartialEq, Clone, Debug)]

@@ -233,8 +228,7 @@ pub trait CrateStore {

     // utility functions
     fn encode_metadata<'a, 'tcx>(&self,
-                                 tcx: TyCtxt<'a, 'tcx, 'tcx>,
-                                 link_meta: &LinkMeta)
+                                 tcx: TyCtxt<'a, 'tcx, 'tcx>)
                                  -> EncodedMetadata;
     fn metadata_encoding_version(&self) -> &[u8];
 }
@@ -68,7 +68,7 @@ impl<'tcx> ConstValue<'tcx> {

 /// A `Value` represents a single self-contained Rust value.
 ///
-/// A `Value` can either refer to a block of memory inside an allocation (`ByRef`) or to a primitve
+/// A `Value` can either refer to a block of memory inside an allocation (`ByRef`) or to a primitive
 /// value held directly, outside of any allocation (`Scalar`). For `ByRef`-values, we remember
 /// whether the pointer is supposed to be aligned or not (also see Place).
 ///
@@ -927,11 +927,11 @@ pub enum TerminatorKind<'tcx> {
     /// Drop(P, goto BB1, unwind BB2)
     /// }
     /// BB1 {
-    ///     // P is now unitialized
+    ///     // P is now uninitialized
     ///     P <- V
     /// }
     /// BB2 {
-    ///     // P is now unitialized -- its dtor panicked
+    ///     // P is now uninitialized -- its dtor panicked
     ///     P <- V
     /// }
     /// ```
@@ -171,7 +171,7 @@ impl<'a, 'tcx> Postorder<'a, 'tcx> {
     // (A, [C])]
     //
     // Now that the top of the stack has no successors we can traverse, each item will
-    // be popped off during iteration until we get back to `A`. This yeilds [E, D, B].
+    // be popped off during iteration until we get back to `A`. This yields [E, D, B].
     //
     // When we yield `B` and call `traverse_successor`, we push `C` to the stack, but
     // since we've already visited `E`, that child isn't added to the stack. The last
@@ -264,12 +264,12 @@ impl<'a, 'tcx> AutoTraitFinder<'a, 'tcx> {
     // The core logic responsible for computing the bounds for our synthesized impl.
     //
     // To calculate the bounds, we call SelectionContext.select in a loop. Like FulfillmentContext,
-    // we recursively select the nested obligations of predicates we encounter. However, whenver we
+    // we recursively select the nested obligations of predicates we encounter. However, whenever we
     // encounter an UnimplementedError involving a type parameter, we add it to our ParamEnv. Since
     // our goal is to determine when a particular type implements an auto trait, Unimplemented
     // errors tell us what conditions need to be met.
     //
-    // This method ends up working somewhat similary to FulfillmentContext, but with a few key
+    // This method ends up working somewhat similarly to FulfillmentContext, but with a few key
     // differences. FulfillmentContext works under the assumption that it's dealing with concrete
     // user code. According, it considers all possible ways that a Predicate could be met - which
     // isn't always what we want for a synthesized impl. For example, given the predicate 'T:

@@ -289,11 +289,11 @@ impl<'a, 'tcx> AutoTraitFinder<'a, 'tcx> {
     // we'll pick up any nested bounds, without ever inferring that 'T: IntoIterator' needs to
     // hold.
     //
-    // One additonal consideration is supertrait bounds. Normally, a ParamEnv is only ever
+    // One additional consideration is supertrait bounds. Normally, a ParamEnv is only ever
     // consutrcted once for a given type. As part of the construction process, the ParamEnv will
     // have any supertrait bounds normalized - e.g. if we have a type 'struct Foo<T: Copy>', the
     // ParamEnv will contain 'T: Copy' and 'T: Clone', since 'Copy: Clone'. When we construct our
-    // own ParamEnv, we need to do this outselves, through traits::elaborate_predicates, or else
+    // own ParamEnv, we need to do this ourselves, through traits::elaborate_predicates, or else
     // SelectionContext will choke on the missing predicates. However, this should never show up in
     // the final synthesized generics: we don't want our generated docs page to contain something
     // like 'T: Copy + Clone', as that's redundant. Therefore, we keep track of a separate
@@ -652,7 +652,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
 }

 // If this error is due to `!: Trait` not implemented but `(): Trait` is
-// implemented, and fallback has occured, then it could be due to a
+// implemented, and fallback has occurred, then it could be due to a
 // variable that used to fallback to `()` now falling back to `!`. Issue a
 // note informing about the change in behaviour.
 if trait_predicate.skip_binder().self_ty().is_never()
@@ -82,7 +82,7 @@ impl<'cx, 'gcx, 'tcx> At<'cx, 'gcx, 'tcx> {
 // Errors and ambiuity in dropck occur in two cases:
 // - unresolved inference variables at the end of typeck
 // - non well-formed types where projections cannot be resolved
-// Either of these should hvae created an error before.
+// Either of these should have created an error before.
 tcx.sess
 .delay_span_bug(span, "dtorck encountered internal error");
 return InferOk {
@@ -26,7 +26,7 @@ use lint::{self, Lint};
 use ich::{StableHashingContext, NodeIdHashingMode};
 use infer::canonical::{CanonicalVarInfo, CanonicalVarInfos};
 use infer::outlives::free_region_map::FreeRegionMap;
-use middle::cstore::{CrateStoreDyn, LinkMeta};
+use middle::cstore::CrateStoreDyn;
 use middle::cstore::EncodedMetadata;
 use middle::lang_items;
 use middle::resolve_lifetime::{self, ObjectLifetimeDefault};

@@ -892,7 +892,7 @@ pub struct GlobalCtxt<'tcx> {

 pub(crate) queries: query::Queries<'tcx>,

-// Records the free variables refrenced by every closure
+// Records the free variables referenced by every closure
 // expression. Do not track deps for this, just recompute it from
 // scratch every time.
 freevars: FxHashMap<DefId, Lrc<Vec<hir::Freevar>>>,

@@ -1490,10 +1490,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
 }

 impl<'a, 'tcx> TyCtxt<'a, 'tcx, 'tcx> {
-pub fn encode_metadata(self, link_meta: &LinkMeta)
+pub fn encode_metadata(self)
 -> EncodedMetadata
 {
-self.cstore.encode_metadata(self, link_meta)
+self.cstore.encode_metadata(self)
 }
 }

@@ -1501,7 +1501,7 @@ impl UniverseIndex {

 /// Creates a universe index from the given integer. Not to be
 /// used lightly lest you pick a bad value. But sometimes we
-/// convert universe indicies into integers and back for various
+/// convert universe indices into integers and back for various
 /// reasons.
 pub fn from_u32(index: u32) -> Self {
 UniverseIndex(index)
@@ -262,7 +262,7 @@ where
 }
 }

-// Visit the explict waiters which use condvars and are resumable
+// Visit the explicit waiters which use condvars and are resumable
 for (i, waiter) in query.latch.info.lock().waiters.iter().enumerate() {
 if let Some(ref waiter_query) = waiter.query {
 if visit(waiter.span, waiter_query.clone()).is_some() {
@@ -47,8 +47,7 @@ use std::str;
 use syntax::attr;

 pub use rustc_codegen_utils::link::{find_crate_name, filename_for_input, default_output_for_target,
-invalid_output_for_target, build_link_meta, out_filename,
-check_file_is_writeable};
+invalid_output_for_target, out_filename, check_file_is_writeable};

 // The third parameter is for env vars, used on windows to set up the
 // path for MSVC to find its DLLs, and gcc to find its bundled
@@ -19,7 +19,7 @@ use base;
 use consts;
 use rustc_incremental::{copy_cgu_workproducts_to_incr_comp_cache_dir, in_incr_comp_dir};
 use rustc::dep_graph::{WorkProduct, WorkProductId, WorkProductFileKind};
-use rustc::middle::cstore::{LinkMeta, EncodedMetadata};
+use rustc::middle::cstore::EncodedMetadata;
 use rustc::session::config::{self, OutputFilenames, OutputType, Passes, Sanitizer, Lto};
 use rustc::session::Session;
 use rustc::util::nodemap::FxHashMap;

@@ -32,6 +32,7 @@ use rustc::ty::TyCtxt;
 use rustc::util::common::{time_ext, time_depth, set_time_depth, print_time_passes_entry};
 use rustc_fs_util::{path2cstr, link_or_copy};
 use rustc_data_structures::small_c_str::SmallCStr;
+use rustc_data_structures::svh::Svh;
 use errors::{self, Handler, Level, DiagnosticBuilder, FatalError, DiagnosticId};
 use errors::emitter::{Emitter};
 use syntax::attr;

@@ -327,7 +328,7 @@ struct AssemblerCommand {
 /// Additional resources used by optimize_and_codegen (not module specific)
 #[derive(Clone)]
 pub struct CodegenContext {
-// Resouces needed when running LTO
+// Resources needed when running LTO
 pub time_passes: bool,
 pub lto: Lto,
 pub no_landing_pads: bool,

@@ -595,7 +596,7 @@ unsafe fn optimize(cgcx: &CodegenContext,
 -C passes=name-anon-globals to the compiler command line.");
 } else {
 bug!("We are using thin LTO buffers without running the NameAnonGlobals pass. \
-This will likely cause errors in LLVM and shoud never happen.");
+This will likely cause errors in LLVM and should never happen.");
 }
 }
 }

@@ -912,13 +913,13 @@ fn need_crate_bitcode_for_rlib(sess: &Session) -> bool {

 pub fn start_async_codegen(tcx: TyCtxt,
 time_graph: Option<TimeGraph>,
-link: LinkMeta,
 metadata: EncodedMetadata,
 coordinator_receive: Receiver<Box<dyn Any + Send>>,
 total_cgus: usize)
 -> OngoingCodegen {
 let sess = tcx.sess;
 let crate_name = tcx.crate_name(LOCAL_CRATE);
+let crate_hash = tcx.crate_hash(LOCAL_CRATE);
 let no_builtins = attr::contains_name(&tcx.hir.krate().attrs, "no_builtins");
 let subsystem = attr::first_attr_value_str_by_name(&tcx.hir.krate().attrs,
 "windows_subsystem");

@@ -1037,7 +1038,7 @@ pub fn start_async_codegen(tcx: TyCtxt,

 OngoingCodegen {
 crate_name,
-link,
+crate_hash,
 metadata,
 windows_subsystem,
 linker_info,

@@ -2270,7 +2271,7 @@ impl SharedEmitterMain {

 pub struct OngoingCodegen {
 crate_name: Symbol,
-link: LinkMeta,
+crate_hash: Svh,
 metadata: EncodedMetadata,
 windows_subsystem: Option<String>,
 linker_info: LinkerInfo,

@@ -2321,7 +2322,7 @@ impl OngoingCodegen {

 (CodegenResults {
 crate_name: self.crate_name,
-link: self.link,
+crate_hash: self.crate_hash,
 metadata: self.metadata,
 windows_subsystem: self.windows_subsystem,
 linker_info: self.linker_info,
@@ -29,7 +29,6 @@ use super::ModuleCodegen;
 use super::ModuleKind;

 use abi;
-use back::link;
 use back::write::{self, OngoingCodegen};
 use llvm::{self, TypeKind, get_param};
 use metadata;

@@ -42,7 +41,7 @@ use rustc::ty::{self, Ty, TyCtxt};
 use rustc::ty::layout::{self, Align, TyLayout, LayoutOf};
 use rustc::ty::query::Providers;
 use rustc::dep_graph::{DepNode, DepConstructor};
-use rustc::middle::cstore::{self, LinkMeta, LinkagePreference};
+use rustc::middle::cstore::{self, LinkagePreference};
 use rustc::middle::exported_symbols;
 use rustc::util::common::{time, print_time_passes_entry};
 use rustc::util::profiling::ProfileCategory;

@@ -608,8 +607,7 @@ fn maybe_create_entry_wrapper(cx: &CodegenCx) {
 }

 fn write_metadata<'a, 'gcx>(tcx: TyCtxt<'a, 'gcx, 'gcx>,
-llvm_module: &ModuleLlvm,
-link_meta: &LinkMeta)
+llvm_module: &ModuleLlvm)
 -> EncodedMetadata {
 use std::io::Write;
 use flate2::Compression;

@@ -641,7 +639,7 @@ fn write_metadata<'a, 'gcx>(tcx: TyCtxt<'a, 'gcx, 'gcx>,
 return EncodedMetadata::new();
 }

-let metadata = tcx.encode_metadata(link_meta);
+let metadata = tcx.encode_metadata();
 if kind == MetadataKind::Uncompressed {
 return metadata;
 }

@@ -719,8 +717,6 @@ pub fn codegen_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
 tcx.sess.fatal("this compiler's LLVM does not support PGO");
 }

-let crate_hash = tcx.crate_hash(LOCAL_CRATE);
-let link_meta = link::build_link_meta(crate_hash);
 let cgu_name_builder = &mut CodegenUnitNameBuilder::new(tcx);

 // Codegen the metadata.

@@ -732,7 +728,7 @@ pub fn codegen_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
 .to_string();
 let metadata_llvm_module = ModuleLlvm::new(tcx.sess, &metadata_cgu_name);
 let metadata = time(tcx.sess, "write metadata", || {
-write_metadata(tcx, &metadata_llvm_module, &link_meta)
+write_metadata(tcx, &metadata_llvm_module)
 });
 tcx.sess.profiler(|p| p.end_activity(ProfileCategory::Codegen));

@@ -754,7 +750,6 @@ pub fn codegen_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
 let ongoing_codegen = write::start_async_codegen(
 tcx,
 time_graph.clone(),
-link_meta,
 metadata,
 rx,
 1);

@@ -789,7 +784,6 @@ pub fn codegen_crate<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
 let ongoing_codegen = write::start_async_codegen(
 tcx,
 time_graph.clone(),
-link_meta,
 metadata,
 rx,
 codegen_units.len());
@@ -88,6 +88,7 @@ use rustc::util::nodemap::{FxHashSet, FxHashMap};
 use rustc::util::profiling::ProfileCategory;
 use rustc_mir::monomorphize;
 use rustc_codegen_utils::codegen_backend::CodegenBackend;
+use rustc_data_structures::svh::Svh;

 mod diagnostics;

@@ -251,7 +252,7 @@ impl CodegenBackend for LlvmCodegenBackend {

 // Now that we won't touch anything in the incremental compilation directory
 // any more, we can finalize it (which involves renaming it)
-rustc_incremental::finalize_session_directory(sess, ongoing_codegen.link.crate_hash);
+rustc_incremental::finalize_session_directory(sess, ongoing_codegen.crate_hash);

 Ok(())
 }

@@ -389,7 +390,7 @@ struct CodegenResults {
 modules: Vec<CompiledModule>,
 allocator_module: Option<CompiledModule>,
 metadata_module: CompiledModule,
-link: rustc::middle::cstore::LinkMeta,
+crate_hash: Svh,
 metadata: rustc::middle::cstore::EncodedMetadata,
 windows_subsystem: Option<String>,
 linker_info: back::linker::LinkerInfo,
@@ -656,7 +656,7 @@ impl FunctionCx<'a, 'll, 'tcx> {
 llargs.push(b);
 return;
 }
-_ => bug!("codegen_argument: {:?} invalid for pair arugment", op)
+_ => bug!("codegen_argument: {:?} invalid for pair argument", op)
 }
 } else if arg.is_unsized_indirect() {
 match op.val {
@@ -44,7 +44,7 @@ use rustc::dep_graph::DepGraph;
 use rustc_target::spec::Target;
 use rustc_data_structures::fx::FxHashMap;
 use rustc_mir::monomorphize::collector;
-use link::{build_link_meta, out_filename};
+use link::out_filename;

 pub use rustc_data_structures::sync::MetadataRef;

@@ -180,8 +180,7 @@ impl CodegenBackend for MetadataOnlyCodegenBackend {
 }
 tcx.sess.abort_if_errors();

-let link_meta = build_link_meta(tcx.crate_hash(LOCAL_CRATE));
-let metadata = tcx.encode_metadata(&link_meta);
+let metadata = tcx.encode_metadata();

 box OngoingCodegen {
 metadata: metadata,
@@ -10,8 +10,6 @@

 use rustc::session::config::{self, OutputFilenames, Input, OutputType};
 use rustc::session::Session;
-use rustc::middle::cstore::LinkMeta;
-use rustc_data_structures::svh::Svh;
 use std::path::{Path, PathBuf};
 use syntax::{ast, attr};
 use syntax_pos::Span;

@@ -50,14 +48,6 @@ fn is_writeable(p: &Path) -> bool {
 }
 }

-pub fn build_link_meta(crate_hash: Svh) -> LinkMeta {
-let r = LinkMeta {
-crate_hash,
-};
-info!("{:?}", r);
-return r;
-}
-
 pub fn find_crate_name(sess: Option<&Session>,
 attrs: &[ast::Attribute],
 input: &Input) -> String {
@@ -139,7 +139,7 @@ impl<A: Array> ArrayVec<A> {
 // whole Drain iterator (like &mut T).
 let range_slice = {
 let arr = &mut self.values as &mut [ManuallyDrop<<A as Array>::Element>];
-slice::from_raw_parts_mut(arr.as_mut_ptr().offset(start as isize),
+slice::from_raw_parts_mut(arr.as_mut_ptr().add(start),
 end - start)
 };
 Drain {

@@ -262,8 +262,8 @@ impl<'a, A: Array> Drop for Drain<'a, A> {
 {
 let arr =
 &mut source_array_vec.values as &mut [ManuallyDrop<<A as Array>::Element>];
-let src = arr.as_ptr().offset(tail as isize);
-let dst = arr.as_mut_ptr().offset(start as isize);
+let src = arr.as_ptr().add(tail);
+let dst = arr.as_mut_ptr().add(start);
 ptr::copy(src, dst, self.tail_len);
 };
 source_array_vec.set_len(start + self.tail_len);
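The `offset` → `add` replacements above lean on the documented equivalence `p.add(n) == p.offset(n as isize)` for unsigned offsets, which drops the `as isize` casts. A minimal standalone sketch of that equivalence:

```rust
fn main() {
    let v = [10i32, 20, 30, 40];
    let p = v.as_ptr();
    unsafe {
        // `add(n)` takes a `usize` and is equivalent to `offset(n as isize)`;
        // it makes the non-negative offset explicit in the signature.
        assert_eq!(*p.add(2), *p.offset(2));
        assert_eq!(*p.add(2), 30);
    }
    println!("ok");
}
```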
@@ -25,7 +25,7 @@
 #![feature(unsize)]
 #![feature(specialization)]
 #![feature(optin_builtin_traits)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
 #![cfg_attr(not(stage0), feature(nll))]
 #![feature(allow_internal_unstable)]
 #![feature(vec_resize_with)]
@@ -125,7 +125,7 @@ impl<A: Array> SmallVec<A> {
 // infallible
 // The spot to put the new value
 {
-let p = self.as_mut_ptr().offset(index as isize);
+let p = self.as_mut_ptr().add(index);
 // Shift everything over to make space. (Duplicating the
 // `index`th element into two consecutive places.)
 ptr::copy(p, p.offset(1), len - index);
@@ -26,7 +26,7 @@
 //!
 //! `MTLock` is a mutex which disappears if cfg!(parallel_queries) is false.
 //!
-//! `MTRef` is a immutable refernce if cfg!(parallel_queries), and an mutable reference otherwise.
+//! `MTRef` is a immutable reference if cfg!(parallel_queries), and an mutable reference otherwise.
 //!
 //! `rustc_erase_owner!` erases a OwningRef owner into Erased or Erased + Send + Sync
 //! depending on the value of cfg!(parallel_queries).

@@ -432,7 +432,7 @@ impl<T> Once<T> {
 /// closures may concurrently be computing a value which the inner value should take.
 /// Only one of these closures are used to actually initialize the value.
 /// If some other closure already set the value, we assert that it our closure computed
-/// a value equal to the value aready set and then
+/// a value equal to the value already set and then
 /// we return the value our closure computed wrapped in a `Option`.
 /// If our closure set the value, `None` is returned.
 /// If the value is already initialized, the closure is not called and `None` is returned.
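The `Once` doc comment above describes first-writer-wins initialization: the first value is stored and `None` returned, while later callers must produce an equal value and get it handed back as `Some`. A minimal sketch of that contract using `std::sync::Mutex` — a hypothetical standalone illustration, not rustc's lock-free `Once` implementation:

```rust
use std::sync::Mutex;

struct Once<T> {
    slot: Mutex<Option<T>>,
}

impl<T: Eq> Once<T> {
    fn new() -> Self {
        Once { slot: Mutex::new(None) }
    }

    // First caller stores its value and gets None back; later callers must
    // compute an equal value, which is returned to them wrapped in Some.
    fn init(&self, value: T) -> Option<T> {
        let mut slot = self.slot.lock().unwrap();
        match &*slot {
            None => {
                *slot = Some(value);
                None
            }
            Some(existing) => {
                assert!(*existing == value, "conflicting initialization");
                Some(value)
            }
        }
    }
}

fn main() {
    let once = Once::new();
    assert_eq!(once.init(7), None);    // first writer wins
    assert_eq!(once.init(7), Some(7)); // later equal value handed back
    println!("ok");
}
```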
@@ -889,7 +889,7 @@ impl<'a, 'tcx> LateLintPass<'a, 'tcx> for UnconditionalRecursion {
 // NB. this has an edge case with non-returning statements,
 // like `loop {}` or `panic!()`: control flow never reaches
 // the exit node through these, so one can have a function
-// that never actually calls itselfs but is still picked up by
+// that never actually calls itself but is still picked up by
 // this lint:
 //
 // fn f(cond: bool) {
@@ -26,7 +26,7 @@
 #![cfg_attr(test, feature(test))]
 #![feature(box_patterns)]
 #![feature(box_syntax)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
 #![cfg_attr(not(stage0), feature(nll))]
 #![feature(quote)]
 #![feature(rustc_diagnostic_macros)]
@@ -486,7 +486,7 @@ impl<'a, 'tcx> ImproperCTypesVisitor<'a, 'tcx> {
 // Protect against infinite recursion, for example
 // `struct S(*mut S);`.
 // FIXME: A recursion limit is necessary as well, for irregular
-// recusive types.
+// recursive types.
 if !cache.insert(ty) {
 return FfiSafe;
 }
@@ -17,7 +17,6 @@ use schema;

 use rustc::ty::query::QueryConfig;
 use rustc::middle::cstore::{CrateStore, DepKind,
-LinkMeta,
 EncodedMetadata, NativeLibraryKind};
 use rustc::middle::exported_symbols::ExportedSymbol;
 use rustc::middle::stability::DeprecationEntry;

@@ -567,11 +566,10 @@ impl CrateStore for cstore::CStore {
 }

 fn encode_metadata<'a, 'tcx>(&self,
-tcx: TyCtxt<'a, 'tcx, 'tcx>,
-link_meta: &LinkMeta)
+tcx: TyCtxt<'a, 'tcx, 'tcx>)
 -> EncodedMetadata
 {
-encoder::encode_metadata(tcx, link_meta)
+encoder::encode_metadata(tcx)
 }

 fn metadata_encoding_version(&self) -> &[u8]
@@ -13,7 +13,7 @@ use index_builder::{FromId, IndexBuilder, Untracked};
 use isolated_encoder::IsolatedEncoder;
 use schema::*;

-use rustc::middle::cstore::{LinkMeta, LinkagePreference, NativeLibrary,
+use rustc::middle::cstore::{LinkagePreference, NativeLibrary,
 EncodedMetadata, ForeignModule};
 use rustc::hir::def::CtorKind;
 use rustc::hir::def_id::{CrateNum, CRATE_DEF_INDEX, DefIndex, DefId, LocalDefId, LOCAL_CRATE};

@@ -52,7 +52,6 @@ use rustc::hir::intravisit;
 pub struct EncodeContext<'a, 'tcx: 'a> {
 opaque: opaque::Encoder,
 pub tcx: TyCtxt<'a, 'tcx, 'tcx>,
-link_meta: &'a LinkMeta,

 lazy_state: LazyState,
 type_shorthands: FxHashMap<Ty<'tcx>, usize>,

@@ -482,7 +481,6 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> {
 let index_bytes = self.position() - i;

 let attrs = tcx.hir.krate_attrs();
-let link_meta = self.link_meta;
 let is_proc_macro = tcx.sess.crate_types.borrow().contains(&CrateType::ProcMacro);
 let has_default_lib_allocator = attr::contains_name(&attrs, "default_lib_allocator");
 let has_global_allocator = *tcx.sess.has_global_allocator.get();

@@ -491,7 +489,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> {
 name: tcx.crate_name(LOCAL_CRATE),
 extra_filename: tcx.sess.opts.cg.extra_filename.clone(),
 triple: tcx.sess.opts.target_triple.clone(),
-hash: link_meta.crate_hash,
+hash: tcx.crate_hash(LOCAL_CRATE),
 disambiguator: tcx.sess.local_crate_disambiguator(),
 panic_strategy: tcx.sess.panic_strategy(),
 edition: hygiene::default_edition(),

@@ -1823,8 +1821,7 @@ impl<'a, 'tcx, 'v> ItemLikeVisitor<'v> for ImplVisitor<'a, 'tcx> {
 // will allow us to slice the metadata to the precise length that we just
 // generated regardless of trailing bytes that end up in it.

-pub fn encode_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
-link_meta: &LinkMeta)
+pub fn encode_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>)
 -> EncodedMetadata
 {
 let mut encoder = opaque::Encoder::new(vec![]);

@@ -1837,7 +1834,6 @@ pub fn encode_metadata<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
 let mut ecx = EncodeContext {
 opaque: encoder,
 tcx,
-link_meta,
 lazy_state: LazyState::NoNode,
 type_shorthands: Default::default(),
 predicate_shorthands: Default::default(),
@@ -207,7 +207,7 @@ impl<'a, 'tcx> Collector<'a, 'tcx> {
 }
 }

-// Update kind and, optionally, the name of all native libaries
+// Update kind and, optionally, the name of all native libraries
 // (there may be more than one) with the specified name.
 for &(ref name, ref new_name, kind) in &self.tcx.sess.opts.libs {
 let mut found = false;
@@ -1080,7 +1080,10 @@ impl<'cx, 'gcx, 'tcx> MirBorrowckCtxt<'cx, 'gcx, 'tcx> {
 }
 }

-if self
+// Check is_empty() first because it's the common case, and doing that
+// way we avoid the clone() call.
+if !self.access_place_error_reported.is_empty() &&
+self
 .access_place_error_reported
 .contains(&(place_span.0.clone(), place_span.1))
 {
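The borrow-check change above is a common micro-optimization: test `is_empty()` before building an expensive lookup key, so the clone is skipped entirely on the hot no-errors path (`&&` short-circuits). A minimal sketch with hypothetical types, not the compiler's actual ones:

```rust
use std::collections::HashSet;

fn already_reported(reported: &HashSet<(String, u32)>, place: &String, span: u32) -> bool {
    // `is_empty()` is the common case; checking it first means we never
    // clone `place` to build the lookup key when no errors were reported.
    !reported.is_empty() && reported.contains(&(place.clone(), span))
}

fn main() {
    let mut reported = HashSet::new();
    let place = String::from("x.field");
    assert!(!already_reported(&reported, &place, 7)); // empty set: no clone happens
    reported.insert((place.clone(), 7));
    assert!(already_reported(&reported, &place, 7));
    println!("ok");
}
```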
@@ -541,7 +541,7 @@ impl<'cg, 'cx, 'tcx, 'gcx> InvalidationGenerator<'cg, 'cx, 'tcx, 'gcx> {
 // unique or mutable borrows are invalidated by writes.
 // Reservations count as writes since we need to check
 // that activating the borrow will be OK
-// TOOD(bob_twinkles) is this actually the right thing to do?
+// FIXME(bob_twinkles) is this actually the right thing to do?
 this.generate_invalidates(borrow_index, context.loc);
 }
 }
@@ -783,7 +783,7 @@ impl<'a, 'gcx, 'tcx> TypeChecker<'a, 'gcx, 'tcx> {
 /// predicates, or otherwise uses the inference context, executes
 /// `op` and then executes all the further obligations that `op`
 /// returns. This will yield a set of outlives constraints amongst
-/// regions which are extracted and stored as having occured at
+/// regions which are extracted and stored as having occurred at
 /// `locations`.
 ///
 /// **Any `rustc::infer` operations that might generate region
@@ -83,7 +83,7 @@ fn place_components_conflict<'gcx, 'tcx>(
 // Our invariant is, that at each step of the iteration:
 // - If we didn't run out of access to match, our borrow and access are comparable
 // and either equal or disjoint.
-// - If we did run out of accesss, the borrow can access a part of it.
+// - If we did run out of access, the borrow can access a part of it.
 loop {
 // loop invariant: borrow_c is always either equal to access_c or disjoint from it.
 if let Some(borrow_c) = borrow_components.next() {
@@ -605,7 +605,7 @@ pub trait BitDenotation: BitwiseOperator {
 /// `sets.on_entry` to that local clone into `statement_effect` and
 /// `terminator_effect`).
 ///
-/// When its false, no local clone is constucted; instead a
+/// When it's false, no local clone is constructed; instead a
 /// reference directly into `on_entry` is passed along via
 /// `sets.on_entry` instead, which represents the flow state at
 /// the block's start, not necessarily the state immediately prior
@@ -462,7 +462,7 @@ impl<'a, 'mir, 'tcx: 'mir, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M
 self.tcx.normalize_erasing_regions(ty::ParamEnv::reveal_all(), substituted)
 }

-/// Return the size and aligment of the value at the given type.
+/// Return the size and alignment of the value at the given type.
 /// Note that the value does not matter if the type is sized. For unsized types,
 /// the value has to be a fat pointer, and we only care about the "extra" data in it.
 pub fn size_and_align_of_dst(
@@ -599,7 +599,7 @@ impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> Memory<'a, 'mir, 'tcx, M> {
 Some(MemoryKind::Stack) => {},
 }
 if let Some(mut alloc) = alloc {
-// ensure llvm knows not to put this into immutable memroy
+// ensure llvm knows not to put this into immutable memory
 alloc.runtime_mutability = mutability;
 let alloc = self.tcx.intern_const_alloc(alloc);
 self.tcx.alloc_map.lock().set_id_memory(alloc_id, alloc);
@@ -26,7 +26,7 @@ Rust MIR: a lowered representation of Rust. Also: an experiment!
 #![feature(const_fn)]
 #![feature(core_intrinsics)]
 #![feature(decl_macro)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
 #![feature(exhaustive_patterns)]
 #![feature(range_contains)]
 #![feature(rustc_diagnostic_macros)]
@@ -704,7 +704,7 @@ impl<'a, 'tcx> MutVisitor<'tcx> for Integrator<'a, 'tcx> {
 *unwind = Some(self.update_target(tgt));
 } else if !self.in_cleanup_block {
 // Unless this drop is in a cleanup block, add an unwind edge to
-// the orignal call's cleanup block
+// the original call's cleanup block
 *unwind = self.cleanup_block;
 }
 }

@@ -716,7 +716,7 @@ impl<'a, 'tcx> MutVisitor<'tcx> for Integrator<'a, 'tcx> {
 *cleanup = Some(self.update_target(tgt));
 } else if !self.in_cleanup_block {
 // Unless this call is in a cleanup block, add an unwind edge to
-// the orignal call's cleanup block
+// the original call's cleanup block
 *cleanup = self.cleanup_block;
 }
 }

@@ -726,7 +726,7 @@ impl<'a, 'tcx> MutVisitor<'tcx> for Integrator<'a, 'tcx> {
 *cleanup = Some(self.update_target(tgt));
 } else if !self.in_cleanup_block {
 // Unless this assert is in a cleanup block, add an unwind edge to
-// the orignal call's cleanup block
+// the original call's cleanup block
 *cleanup = self.cleanup_block;
 }
 }
@@ -302,7 +302,7 @@ impl<'a, 'tcx> Promoter<'a, 'tcx> {
 let ref mut statement = blocks[loc.block].statements[loc.statement_index];
 match statement.kind {
 StatementKind::Assign(_, Rvalue::Ref(_, _, ref mut place)) => {
-// Find the underlying local for this (necessarilly interior) borrow.
+// Find the underlying local for this (necessarily interior) borrow.
 // HACK(eddyb) using a recursive function because of mutable borrows.
 fn interior_base<'a, 'tcx>(place: &'a mut Place<'tcx>)
 -> &'a mut Place<'tcx> {
@@ -190,7 +190,7 @@ impl MirPass for RestoreSubsliceArrayMoveOut {
 let local_use = &visitor.locals_use[*local];
 let opt_index_and_place = Self::try_get_item_source(local_use, mir);
 // each local should be used twice:
-// in assign and in aggregate statments
+// in assign and in aggregate statements
 if local_use.use_count == 2 && opt_index_and_place.is_some() {
 let (index, src_place) = opt_index_and_place.unwrap();
 return Some((local_use, index, src_place));

@@ -231,15 +231,15 @@ impl RestoreSubsliceArrayMoveOut {
 if opt_size.is_some() && items.iter().all(
 |l| l.is_some() && l.unwrap().2 == opt_src_place.unwrap()) {

-let indicies: Vec<_> = items.iter().map(|x| x.unwrap().1).collect();
-for i in 1..indicies.len() {
-if indicies[i - 1] + 1 != indicies[i] {
+let indices: Vec<_> = items.iter().map(|x| x.unwrap().1).collect();
+for i in 1..indices.len() {
+if indices[i - 1] + 1 != indices[i] {
 return;
 }
 }

-let min = *indicies.first().unwrap();
-let max = *indicies.last().unwrap();
+let min = *indices.first().unwrap();
+let max = *indices.last().unwrap();

 for item in items {
 let locals_use = item.unwrap().0;
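The loop renamed above checks that a sorted index list is contiguous (each entry equals its predecessor plus one) before computing the min/max of the run. A standalone sketch of the same check:

```rust
// Returns true when the indices form one contiguous run, e.g. [3, 4, 5].
fn is_contiguous(indices: &[usize]) -> bool {
    (1..indices.len()).all(|i| indices[i - 1] + 1 == indices[i])
}

fn main() {
    assert!(is_contiguous(&[3, 4, 5]));
    assert!(!is_contiguous(&[3, 5, 6])); // gap between 3 and 5
    assert!(is_contiguous(&[]));         // trivially contiguous
    println!("ok");
}
```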
@@ -459,7 +459,7 @@ fn write_scope_tree(
 let indent = depth * INDENT.len();

 let children = match scope_tree.get(&parent) {
-Some(childs) => childs,
+Some(children) => children,
 None => return Ok(()),
 };

@@ -201,7 +201,7 @@ fn resolve_struct_error<'sess, 'a>(resolver: &'sess Resolver,
 if let Some(impl_span) = maybe_impl_defid.map_or(None,
 |def_id| resolver.definitions.opt_span(def_id)) {
 err.span_label(reduce_impl_span_to_impl_keyword(cm, impl_span),
-"`Self` type implicitely declared here, on the `impl`");
+"`Self` type implicitly declared here, on the `impl`");
 }
 },
 Def::TyParam(typaram_defid) => {
@@ -81,7 +81,7 @@ fn dropck_outlives<'tcx>(
 // into the types of its fields `(B, Vec<A>)`. These will get
 // pushed onto the stack. Eventually, expanding `Vec<A>` will
 // lead to us trying to push `A` a second time -- to prevent
-// infinite recusion, we notice that `A` was already pushed
+// infinite recursion, we notice that `A` was already pushed
 // once and stop.
 let mut ty_stack = vec![(for_ty, 0)];

@@ -121,7 +121,7 @@ pub fn resolve_interior<'a, 'gcx, 'tcx>(fcx: &'a FnCtxt<'a, 'gcx, 'tcx>,
 // Replace all regions inside the generator interior with late bound regions
 // Note that each region slot in the types gets a new fresh late bound region,
 // which means that none of the regions inside relate to any other, even if
-// typeck had previously found contraints that would cause them to be related.
+// typeck had previously found constraints that would cause them to be related.
 let mut counter = 0;
 let type_list = fcx.tcx.fold_regions(&type_list, &mut false, |_, current_depth| {
 counter += 1;
@@ -879,7 +879,7 @@ fn typeck_tables_of<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
 // backwards compatibility. This makes fallback a stronger type hint than a cast coercion.
 fcx.check_casts();

-// Closure and generater analysis may run after fallback
+// Closure and generator analysis may run after fallback
 // because they don't constrain other type variables.
 fcx.closure_analyze(body);
 assert!(fcx.deferred_call_resolutions.borrow().is_empty());

@@ -2332,7 +2332,7 @@ impl<'a, 'gcx, 'tcx> FnCtxt<'a, 'gcx, 'tcx> {
 // unconstrained floats with f64.
 // Fallback becomes very dubious if we have encountered type-checking errors.
 // In that case, fallback to TyError.
-// The return value indicates whether fallback has occured.
+// The return value indicates whether fallback has occurred.
 fn fallback_if_possible(&self, ty: Ty<'tcx>) -> bool {
 use rustc::ty::error::UnconstrainedNumeric::Neither;
 use rustc::ty::error::UnconstrainedNumeric::{UnconstrainedInt, UnconstrainedFloat};
@@ -1284,7 +1284,7 @@ impl<'a, 'gcx, 'tcx> RegionCtxt<'a, 'gcx, 'tcx> {
 // how all the types get adjusted.)
 match ref_kind {
 ty::ImmBorrow => {
-// The reference being reborrowed is a sharable ref of
+// The reference being reborrowed is a shareable ref of
 // type `&'a T`. In this case, it doesn't matter where we
 // *found* the `&T` pointer, the memory it references will
 // be valid and immutable for `'a`. So we can stop here.
@@ -516,7 +516,7 @@ impl<'cx, 'gcx, 'tcx> WritebackCx<'cx, 'gcx, 'tcx> {
 }

 fn visit_node_id(&mut self, span: Span, hir_id: hir::HirId) {
-// Export associated path extensions and method resultions.
+// Export associated path extensions and method resolutions.
 if let Some(def) = self.fcx
 .tables
 .borrow_mut()
@@ -152,7 +152,7 @@ fn enforce_impl_params_are_constrained<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
 // }
 // ```
 //
-// In a concession to backwards compatbility, we continue to
+// In a concession to backwards compatibility, we continue to
 // permit those, so long as the lifetimes aren't used in
 // associated types. I believe this is sound, because lifetimes
 // used elsewhere are not projected back out.
@@ -824,7 +824,7 @@ impl<'a, 'tcx, 'rcx, 'cstore> AutoTraitFinder<'a, 'tcx, 'rcx, 'cstore> {
 // In fact, the iteration of an FxHashMap can even vary between platforms,
 // since FxHasher has different behavior for 32-bit and 64-bit platforms.
 //
-// Obviously, it's extremely undesireable for documentation rendering
+// Obviously, it's extremely undesirable for documentation rendering
 // to be depndent on the platform it's run on. Apart from being confusing
 // to end users, it makes writing tests much more difficult, as predicates
 // can appear in any order in the final result.
@@ -836,7 +836,7 @@ impl<'a, 'tcx, 'rcx, 'cstore> AutoTraitFinder<'a, 'tcx, 'rcx, 'cstore> {
 // predicates and bounds, however, we ensure that for a given codebase, all
 // auto-trait impls always render in exactly the same way.
 //
-// Using the Debug impementation for sorting prevents us from needing to
+// Using the Debug implementation for sorting prevents us from needing to
 // write quite a bit of almost entirely useless code (e.g. how should two
 // Types be sorted relative to each other). It also allows us to solve the
 // problem for both WherePredicates and GenericBounds at the same time. This
@@ -31,7 +31,7 @@ pub enum Cfg {
 True,
 /// Denies all configurations.
 False,
-/// A generic configration option, e.g. `test` or `target_os = "linux"`.
+/// A generic configuration option, e.g. `test` or `target_os = "linux"`.
 Cfg(Symbol, Option<Symbol>),
 /// Negate a configuration requirement, i.e. `not(x)`.
 Not(Box<Cfg>),
@@ -315,7 +315,7 @@ pub struct Cache {
 // the access levels from crateanalysis.
 pub access_levels: Arc<AccessLevels<DefId>>,

-/// The version of the crate being documented, if given fron the `--crate-version` flag.
+/// The version of the crate being documented, if given from the `--crate-version` flag.
 pub crate_version: Option<String>,

 // Private fields only used when initially crawling a crate to build a cache
@@ -52,6 +52,8 @@
 var themesWidth = null;

+var titleBeforeSearch = document.title;
+
 if (!String.prototype.startsWith) {
 String.prototype.startsWith = function(searchString, position) {
 position = position || 0;
@@ -267,6 +269,7 @@
 ev.preventDefault();
 addClass(search, "hidden");
 removeClass(document.getElementById("main"), "hidden");
+document.title = titleBeforeSearch;
 }
 defocusSearchBar();
 }
@@ -17,9 +17,7 @@ use std::collections::{LinkedList, VecDeque, BTreeMap, BTreeSet, HashMap, HashSe
 use std::rc::Rc;
 use std::sync::Arc;

-impl<
-    T: Encodable
-> Encodable for LinkedList<T> {
+impl<T: Encodable> Encodable for LinkedList<T> {
 fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
 s.emit_seq(self.len(), |s| {
 for (i, e) in self.iter().enumerate() {
@@ -65,10 +63,10 @@ impl<T:Decodable> Decodable for VecDeque<T> {
 }
 }

-impl<
-    K: Encodable + PartialEq + Ord,
-    V: Encodable
-> Encodable for BTreeMap<K, V> {
+impl<K, V> Encodable for BTreeMap<K, V>
+    where K: Encodable + PartialEq + Ord,
+          V: Encodable
+{
 fn encode<S: Encoder>(&self, e: &mut S) -> Result<(), S::Error> {
 e.emit_map(self.len(), |e| {
 let mut i = 0;
@@ -82,10 +80,10 @@ impl<
 }
 }

-impl<
-    K: Decodable + PartialEq + Ord,
-    V: Decodable
-> Decodable for BTreeMap<K, V> {
+impl<K, V> Decodable for BTreeMap<K, V>
+    where K: Decodable + PartialEq + Ord,
+          V: Decodable
+{
 fn decode<D: Decoder>(d: &mut D) -> Result<BTreeMap<K, V>, D::Error> {
 d.read_map(|d, len| {
 let mut map = BTreeMap::new();
@@ -99,9 +97,9 @@ impl<
 }
 }

-impl<
-    T: Encodable + PartialEq + Ord
-> Encodable for BTreeSet<T> {
+impl<T> Encodable for BTreeSet<T>
+    where T: Encodable + PartialEq + Ord
+{
 fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
 s.emit_seq(self.len(), |s| {
 let mut i = 0;
@@ -114,9 +112,9 @@ impl<
 }
 }

-impl<
-    T: Decodable + PartialEq + Ord
-> Decodable for BTreeSet<T> {
+impl<T> Decodable for BTreeSet<T>
+    where T: Decodable + PartialEq + Ord
+{
 fn decode<D: Decoder>(d: &mut D) -> Result<BTreeSet<T>, D::Error> {
 d.read_seq(|d, len| {
 let mut set = BTreeSet::new();
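The hunks above only move inline bound lists into `where` clauses; the compiler treats the two forms identically, so the change is purely stylistic. A minimal sketch of the equivalence, using a hypothetical `first_*` helper rather than the libserialize impls:

```rust
use std::collections::BTreeSet;

// Inline bounds: compact for short lists, unwieldy for long ones.
fn first_inline<T: Ord + Clone>(set: &BTreeSet<T>) -> Option<T> {
    set.iter().next().cloned()
}

// The same bounds in a `where` clause, the style the diff adopts for
// the BTreeMap/BTreeSet impls. Semantics are unchanged.
fn first_where<T>(set: &BTreeSet<T>) -> Option<T>
    where T: Ord + Clone
{
    set.iter().next().cloned()
}

fn main() {
    let set: BTreeSet<i32> = [3, 1, 2].iter().cloned().collect();
    // BTreeSet iterates in sorted order, so both return the minimum.
    assert_eq!(first_inline(&set), Some(1));
    assert_eq!(first_where(&set), Some(1));
}
```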
@@ -118,6 +118,7 @@ pub fn write_signed_leb128_to<W>(mut value: i128, mut write: W)
 }
 }

+#[inline]
 pub fn write_signed_leb128(out: &mut Vec<u8>, value: i128) {
 write_signed_leb128_to(value, |v| write_to_vec(out, v))
 }
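The function gaining `#[inline]` above emits signed LEB128. As background, here is a self-contained sketch of that encoding (not the libserialize source): each byte carries 7 value bits, the high bit is a continuation flag, and the 0x40 bit of the final byte doubles as the sign:

```rust
// Sketch of signed LEB128 encoding; illustrative, not libserialize's code.
fn write_signed_leb128(out: &mut Vec<u8>, mut value: i64) {
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7; // arithmetic shift, so the sign is preserved
        // Done once the remaining bits are pure sign extension and the
        // 0x40 (sign) bit of this byte already matches that sign.
        let done = (value == 0 && byte & 0x40 == 0)
                || (value == -1 && byte & 0x40 != 0);
        if !done {
            byte |= 0x80; // continuation flag
        }
        out.push(byte);
        if done {
            break;
        }
    }
}

fn main() {
    let mut buf = Vec::new();
    write_signed_leb128(&mut buf, -1);
    assert_eq!(buf, [0x7f]); // -1 fits in a single byte
    buf.clear();
    write_signed_leb128(&mut buf, 64);
    assert_eq!(buf, [0xc0, 0x00]); // 64 needs a second byte to clear the sign bit
}
```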
@@ -31,6 +31,7 @@ impl Encoder {
 self.data
 }

+#[inline]
 pub fn emit_raw_bytes(&mut self, s: &[u8]) {
 self.data.extend_from_slice(s);
 }
@@ -193,6 +194,7 @@ impl<'a> Decoder<'a> {
 self.position += bytes;
 }

+#[inline]
 pub fn read_raw_bytes(&mut self, s: &mut [u8]) -> Result<(), String> {
 let start = self.position;
 let end = start + s.len();
@@ -326,6 +328,7 @@ impl<'a> serialize::Decoder for Decoder<'a> {
 Ok(Cow::Borrowed(s))
 }

+#[inline]
 fn error(&mut self, err: &str) -> Self::Error {
 err.to_string()
 }
@@ -119,6 +119,7 @@ pub trait Encoder {
 self.emit_enum("Option", f)
 }

+#[inline]
 fn emit_option_none(&mut self) -> Result<(), Self::Error> {
 self.emit_enum_variant("None", 0, 0, |_| Ok(()))
 }
@@ -560,14 +561,12 @@ impl< T: Decodable> Decodable for Box<[T]> {
 }

 impl<T:Encodable> Encodable for Rc<T> {
 #[inline]
 fn encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {
 (**self).encode(s)
 }
 }

 impl<T:Decodable> Decodable for Rc<T> {
 #[inline]
 fn decode<D: Decoder>(d: &mut D) -> Result<Rc<T>, D::Error> {
 Ok(Rc::new(Decodable::decode(d)?))
 }
@@ -618,7 +617,9 @@ impl<'a, T:Encodable> Encodable for Cow<'a, [T]> where [T]: ToOwned<Owned = Vec<
 }
 }

-impl<T:Decodable+ToOwned> Decodable for Cow<'static, [T]> where [T]: ToOwned<Owned = Vec<T>> {
+impl<T:Decodable+ToOwned> Decodable for Cow<'static, [T]>
+    where [T]: ToOwned<Owned = Vec<T>>
+{
 fn decode<D: Decoder>(d: &mut D) -> Result<Cow<'static, [T]>, D::Error> {
 d.read_seq(|d, len| {
 let mut v = Vec::with_capacity(len);
@@ -234,10 +234,10 @@ fn can_alias_safehash_as_hash() {
 // make a RawBucket point to invalid memory using safe code.
 impl<K, V> RawBucket<K, V> {
 unsafe fn hash(&self) -> *mut HashUint {
-self.hash_start.offset(self.idx as isize)
+self.hash_start.add(self.idx)
 }
 unsafe fn pair(&self) -> *mut (K, V) {
-self.pair_start.offset(self.idx as isize) as *mut (K, V)
+self.pair_start.add(self.idx) as *mut (K, V)
 }
 unsafe fn hash_pair(&self) -> (*mut HashUint, *mut (K, V)) {
 (self.hash(), self.pair())
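The `offset` → `add` change in the hunk above is behavior-preserving: `ptr.add(n)` is defined as `ptr.offset(n as isize)` for unsigned element counts, so the diff just drops the cast. A small illustration on a plain array, not the hashtable internals:

```rust
fn main() {
    let vals = [10u32, 20, 30, 40];
    let base = vals.as_ptr();
    unsafe {
        // Two equivalent ways to reach the element at index 2; both offsets
        // are counted in elements (here u32), not bytes.
        assert_eq!(*base.offset(2_isize), 30);
        assert_eq!(*base.add(2), 30); // no signed cast needed
    }
}
```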
@@ -88,7 +88,7 @@
 /// This function acquires exclusive access to the task context.
 ///
 /// Panics if no task has been set or if the task context has already been
-/// retrived by a surrounding call to get_task_cx.
+/// retrieved by a surrounding call to get_task_cx.
 pub fn get_task_cx<F, R>(f: F) -> R
 where
 F: FnOnce(&mut task::Context) -> R
@@ -889,7 +889,7 @@ impl<W: Write> Write for LineWriter<W> {

 // Find the last newline character in the buffer provided. If found then
 // we're going to write all the data up to that point and then flush,
-// otherewise we just write the whole block to the underlying writer.
+// otherwise we just write the whole block to the underlying writer.
 let i = match memchr::memrchr(b'\n', buf) {
 Some(i) => i,
 None => return self.inner.write(buf),
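The comment corrected above describes `LineWriter`'s strategy: split the buffer at the last newline, write and flush everything up to and including it, and keep the remainder buffered. A simplified model of that split (assumed names, not the real std code):

```rust
// Returns (flush_now, keep_buffered), split at the last newline.
// Simplified model of LineWriter's write path, not the std implementation.
fn split_on_last_newline(buf: &[u8]) -> (&[u8], &[u8]) {
    match buf.iter().rposition(|&b| b == b'\n') {
        Some(i) => buf.split_at(i + 1), // include the newline in the flushed part
        None => (&[], buf),             // no newline: nothing is flushed eagerly
    }
}

fn main() {
    let (flush, keep) = split_on_last_newline(b"a\nb\nc");
    assert_eq!(flush, b"a\nb\n");
    assert_eq!(keep, b"c");

    let (flush, keep) = split_on_last_newline(b"abc");
    assert!(flush.is_empty());
    assert_eq!(keep, b"abc");
}
```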
@@ -270,7 +270,7 @@
 #![feature(libc)]
 #![feature(link_args)]
 #![feature(linkage)]
-#![feature(macro_vis_matcher)]
+#![cfg_attr(stage0, feature(macro_vis_matcher))]
 #![feature(needs_panic_runtime)]
 #![feature(never_type)]
 #![cfg_attr(not(stage0), feature(nll))]
@@ -57,7 +57,7 @@ pub fn memrchr(needle: u8, haystack: &[u8]) -> Option<usize> {

 #[cfg(test)]
 mod tests {
-// test the implementations for the current plattform
+// test the implementations for the current platform
 use super::{memchr, memrchr};

 #[test]
Some files were not shown because too many files have changed in this diff.