[med-svn] [r-cran-rlang] 01/02: New upstream version 0.1.2

Andreas Tille <tille@debian.org>
Fri Sep 29 21:59:38 UTC 2017


This is an automated email from the git hooks/post-receive script.

tille pushed a commit to branch master
in repository r-cran-rlang.

commit 6f1bcc1a4db22d9c9dfbd34dd3d059deeca50a15
Author: Andreas Tille <tille@debian.org>
Date:   Fri Sep 29 23:56:09 2017 +0200

    New upstream version 0.1.2
---
 DESCRIPTION                          |   27 +
 MD5                                  |  220 +++++++
 NAMESPACE                            |  371 ++++++++++++
 NEWS.md                              |   61 ++
 R/arg.R                              |  155 +++++
 R/attr.R                             |  282 +++++++++
 R/cnd-handlers.R                     |  235 ++++++++
 R/cnd-restarts.R                     |  283 +++++++++
 R/cnd.R                              |  293 ++++++++++
 R/compat-lazyeval.R                  |   90 +++
 R/compat-oldrel.R                    |   41 ++
 R/compat-purrr.R                     |  162 ++++++
 R/dictionary.R                       |  168 ++++++
 R/dots.R                             |  260 +++++++++
 R/env.R                              | 1053 ++++++++++++++++++++++++++++++++++
 R/eval-tidy.R                        |  350 +++++++++++
 R/eval.R                             |  205 +++++++
 R/expr-lang.R                        |  554 ++++++++++++++++++
 R/expr-node.R                        |  282 +++++++++
 R/expr-sym.R                         |   39 ++
 R/expr.R                             |  346 +++++++++++
 R/fn.R                               |  412 +++++++++++++
 R/formula.R                          |  198 +++++++
 R/operators.R                        |   90 +++
 R/parse.R                            |   90 +++
 R/quo-unquote.R                      |  185 ++++++
 R/quo.R                              |  493 ++++++++++++++++
 R/quos.R                             |  133 +++++
 R/rlang.R                            |    2 +
 R/stack.R                            |  681 ++++++++++++++++++++++
 R/types.R                            |  685 ++++++++++++++++++++++
 R/utils.R                            |   99 ++++
 R/vector-chr.R                       |  256 +++++++++
 R/vector-coercion.R                  |  227 ++++++++
 R/vector-ctor.R                      |  251 ++++++++
 R/vector-missing.R                   |  122 ++++
 R/vector-raw.R                       |   26 +
 R/vector-squash.R                    |  186 ++++++
 R/vector-utils.R                     |   93 +++
 README.md                            |   49 ++
 build/vignette.rds                   |  Bin 0 -> 209 bytes
 inst/doc/tidy-evaluation.R           |   28 +
 inst/doc/tidy-evaluation.Rmd         |  263 +++++++++
 inst/doc/tidy-evaluation.html        |  137 +++++
 man/abort.Rd                         |   50 ++
 man/are_na.Rd                        |   58 ++
 man/arg_match.Rd                     |   34 ++
 man/as_bytes.Rd                      |   18 +
 man/as_env.Rd                        |   42 ++
 man/as_function.Rd                   |   46 ++
 man/as_overscope.Rd                  |  121 ++++
 man/as_pairlist.Rd                   |   18 +
 man/as_quosure.Rd                    |   64 +++
 man/as_utf8_character.Rd             |   55 ++
 man/bare-type-predicates.Rd          |   65 +++
 man/call_inspect.Rd                  |   22 +
 man/caller_env.Rd                    |   27 +
 man/cnd_signal.Rd                    |  134 +++++
 man/dictionary.Rd                    |   40 ++
 man/dots_list.Rd                     |   67 +++
 man/dots_n.Rd                        |   19 +
 man/dots_values.Rd                   |   30 +
 man/duplicate.Rd                     |   34 ++
 man/empty_env.Rd                     |   17 +
 man/env.Rd                           |  147 +++++
 man/env_bind.Rd                      |  156 +++++
 man/env_bury.Rd                      |   46 ++
 man/env_clone.Rd                     |   24 +
 man/env_depth.Rd                     |   28 +
 man/env_get.Rd                       |   34 ++
 man/env_has.Rd                       |   35 ++
 man/env_inherits.Rd                  |   17 +
 man/env_names.Rd                     |   52 ++
 man/env_parent.Rd                    |   56 ++
 man/env_unbind.Rd                    |   43 ++
 man/eval_bare.Rd                     |   77 +++
 man/eval_tidy.Rd                     |   99 ++++
 man/eval_tidy_.Rd                    |   43 ++
 man/exiting.Rd                       |   63 ++
 man/expr.Rd                          |   83 +++
 man/expr_interp.Rd                   |   53 ++
 man/expr_label.Rd                    |   47 ++
 man/exprs_auto_name.Rd               |   29 +
 man/f_rhs.Rd                         |   49 ++
 man/f_text.Rd                        |   39 ++
 man/flatten.Rd                       |  112 ++++
 man/fn_env.Rd                        |   36 ++
 man/fn_fmls.Rd                       |   44 ++
 man/frame_position.Rd                |   49 ++
 man/friendly_type.Rd                 |   23 +
 man/get_env.Rd                       |   84 +++
 man/has_length.Rd                    |   28 +
 man/has_name.Rd                      |   28 +
 man/invoke.Rd                        |   62 ++
 man/is_callable.Rd                   |   43 ++
 man/is_condition.Rd                  |   14 +
 man/is_copyable.Rd                   |   34 ++
 man/is_empty.Rd                      |   19 +
 man/is_env.Rd                        |   18 +
 man/is_expr.Rd                       |  115 ++++
 man/is_formula.Rd                    |   68 +++
 man/is_frame.Rd                      |   14 +
 man/is_function.Rd                   |  109 ++++
 man/is_installed.Rd                  |   22 +
 man/is_integerish.Rd                 |   36 ++
 man/is_lang.Rd                       |   80 +++
 man/is_named.Rd                      |   68 +++
 man/is_pairlist.Rd                   |   29 +
 man/is_quosure.Rd                    |   44 ++
 man/is_stack.Rd                      |   20 +
 man/is_symbol.Rd                     |   14 +
 man/is_true.Rd                       |   25 +
 man/lang.Rd                          |   97 ++++
 man/lang_args.Rd                     |   44 ++
 man/lang_fn.Rd                       |   30 +
 man/lang_head.Rd                     |   37 ++
 man/lang_modify.Rd                   |   62 ++
 man/lang_name.Rd                     |   40 ++
 man/lang_standardise.Rd              |   20 +
 man/missing.Rd                       |   62 ++
 man/missing_arg.Rd                   |   68 +++
 man/modify.Rd                        |   30 +
 man/mut_utf8_locale.Rd               |   46 ++
 man/names2.Rd                        |   24 +
 man/new_cnd.Rd                       |   64 +++
 man/new_formula.Rd                   |   26 +
 man/new_function.Rd                  |   40 ++
 man/ns_env.Rd                        |   29 +
 man/op-definition.Rd                 |   45 ++
 man/op-get-attr.Rd                   |   21 +
 man/op-na-default.Rd                 |   23 +
 man/op-null-default.Rd               |   21 +
 man/pairlist.Rd                      |  152 +++++
 man/parse_expr.Rd                    |   75 +++
 man/prepend.Rd                       |   31 +
 man/prim_name.Rd                     |   14 +
 man/quasiquotation.Rd                |  112 ++++
 man/quo-predicates.Rd                |   46 ++
 man/quo_expr.Rd                      |   57 ++
 man/quosure.Rd                       |  196 +++++++
 man/quosures.Rd                      |   89 +++
 man/restarting.Rd                    |   69 +++
 man/return_from.Rd                   |   54 ++
 man/rst_abort.Rd                     |   57 ++
 man/rst_list.Rd                      |   35 ++
 man/rst_muffle.Rd                    |   72 +++
 man/scalar-type-predicates.Rd        |   49 ++
 man/scoped_env.Rd                    |   98 ++++
 man/seq2.Rd                          |   36 ++
 man/set_attrs.Rd                     |   52 ++
 man/set_chr_encoding.Rd              |   77 +++
 man/set_expr.Rd                      |   45 ++
 man/set_names.Rd                     |   44 ++
 man/splice.Rd                        |   52 ++
 man/stack.Rd                         |  147 +++++
 man/stack_trim.Rd                    |   51 ++
 man/string.Rd                        |   44 ++
 man/switch_lang.Rd                   |   84 +++
 man/switch_type.Rd                   |   88 +++
 man/sym.Rd                           |   23 +
 man/tidyeval-data.Rd                 |   24 +
 man/type-predicates.Rd               |   66 +++
 man/type_of.Rd                       |   48 ++
 man/vector-along.Rd                  |   53 ++
 man/vector-coercion.Rd               |  137 +++++
 man/vector-construction.Rd           |  109 ++++
 man/vector-len.Rd                    |   47 ++
 man/with_env.Rd                      |   61 ++
 man/with_handlers.Rd                 |   90 +++
 man/with_restarts.Rd                 |  120 ++++
 src/attrs.c                          |   18 +
 src/capture.c                        |  111 ++++
 src/env.c                            |    7 +
 src/eval.c                           |    6 +
 src/export.c                         |   37 ++
 src/export.h                         |   18 +
 src/formula.c                        |   76 +++
 src/formula.h                        |   11 +
 src/init.c                           |  104 ++++
 src/lang.c                           |    6 +
 src/pairlist.c                       |   64 +++
 src/replace-na.c                     |  129 +++++
 src/sexp.c                           |   12 +
 src/splice.c                         |  288 ++++++++++
 src/symbol.c                         |  168 ++++++
 src/unquote.c                        |  180 ++++++
 src/utils.c                          |  218 +++++++
 src/utils.h                          |   40 ++
 src/vector.h                         |  126 ++++
 tests/testthat.R                     |    4 +
 tests/testthat/helper-capture.R      |   10 +
 tests/testthat/helper-locale.R       |   46 ++
 tests/testthat/helper-stack.R        |   22 +
 tests/testthat/test-arg.R            |   37 ++
 tests/testthat/test-attr.R           |   51 ++
 tests/testthat/test-compat.R         |   90 +++
 tests/testthat/test-conditions.R     |  123 ++++
 tests/testthat/test-dictionary.R     |   60 ++
 tests/testthat/test-dots.R           |   61 ++
 tests/testthat/test-encoding.R       |   39 ++
 tests/testthat/test-env.R            |  180 ++++++
 tests/testthat/test-eval.R           |   13 +
 tests/testthat/test-fn.R             |   74 +++
 tests/testthat/test-formula.R        |  151 +++++
 tests/testthat/test-lang-call.R      |  126 ++++
 tests/testthat/test-lang-expr.R      |   49 ++
 tests/testthat/test-lang.R           |   66 +++
 tests/testthat/test-operators.R      |   24 +
 tests/testthat/test-parse.R          |   19 +
 tests/testthat/test-quo-enquo.R      |   36 ++
 tests/testthat/test-quosure.R        |   35 ++
 tests/testthat/test-stack.R          |  320 +++++++++++
 tests/testthat/test-tidy-capture.R   |  208 +++++++
 tests/testthat/test-tidy-eval.R      |  217 +++++++
 tests/testthat/test-tidy-unquote.R   |  180 ++++++
 tests/testthat/test-types-coercion.R |   50 ++
 tests/testthat/test-types.R          |   59 ++
 tests/testthat/test-utils.R          |   12 +
 tests/testthat/test-vector.R         |  187 ++++++
 vignettes/releases/rlang-0.1.Rmd     |  202 +++++++
 vignettes/tidy-evaluation.Rmd        |  263 +++++++++
 221 files changed, 21768 insertions(+)

diff --git a/DESCRIPTION b/DESCRIPTION
new file mode 100644
index 0000000..9eb83c2
--- /dev/null
+++ b/DESCRIPTION
@@ -0,0 +1,27 @@
+Package: rlang
+Version: 0.1.2
+Title: Functions for Base Types and Core R and 'Tidyverse' Features
+Description: A toolbox for working with base types, core R features
+  like the condition system, and core 'Tidyverse' features like tidy
+  evaluation.
+Authors@R: c(
+    person("Lionel", "Henry", ,"lionel@rstudio.com", c("aut", "cre")),
+    person("Hadley", "Wickham", ,"hadley@rstudio.com", "aut"),
+    person("RStudio", role = "cph")
+    )
+License: GPL-3
+LazyData: true
+Depends: R (>= 3.1.0)
+Suggests: knitr, rmarkdown (>= 0.2.65), testthat, covr
+VignetteBuilder: knitr
+RoxygenNote: 6.0.1
+URL: http://rlang.tidyverse.org, https://github.com/tidyverse/rlang
+BugReports: https://github.com/tidyverse/rlang/issues
+NeedsCompilation: yes
+Packaged: 2017-08-09 16:20:26 UTC; lionel
+Author: Lionel Henry [aut, cre],
+  Hadley Wickham [aut],
+  RStudio [cph]
+Maintainer: Lionel Henry <lionel@rstudio.com>
+Repository: CRAN
+Date/Publication: 2017-08-09 20:37:03 UTC
diff --git a/MD5 b/MD5
new file mode 100644
index 0000000..061ba00
--- /dev/null
+++ b/MD5
@@ -0,0 +1,220 @@
+25eea7a62ac382818e2404acaa118a06 *DESCRIPTION
+65e8be68e9c270cee089874f0a91d54b *NAMESPACE
+3dc858b904ccd682a399764432254f21 *NEWS.md
+5850c9a2547cc102760f6e7b31a0ab44 *R/arg.R
+b87c840684fbca7a060d0f26c50de634 *R/attr.R
+ebe43cd5909495eda412096fe8504082 *R/cnd-handlers.R
+ee99999103dcb26c91daa235756e00bb *R/cnd-restarts.R
+2594bdb3b254f726bf0738dccd37c63c *R/cnd.R
+b5bb1035d693ae648f96b45fdafbbbe4 *R/compat-lazyeval.R
+552b6c37e0f78fa99b9233101a2c59f7 *R/compat-oldrel.R
+feaa904f11d6970ddbc1057a30cf6610 *R/compat-purrr.R
+1a08a8fbda2f47103500c2bf75e16b9a *R/dictionary.R
+5e40bec0fe1de4cdbee2e69b63e71edf *R/dots.R
+4994de628a91c8c0666ce64b9189fa63 *R/env.R
+a274f5a30b044f570f92afa22103ab61 *R/eval-tidy.R
+c75b93394da034fba516f3e4de376d73 *R/eval.R
+388c0182395c9097dec25488f238a9f0 *R/expr-lang.R
+940d8b698ba48d98bb2d63eed74e63cc *R/expr-node.R
+9bcaac4bf3d9b4c877d80c2052e99aea *R/expr-sym.R
+58249a27935c8f6f4593fd83f638aa01 *R/expr.R
+0de33671a5b1cc5759e35ea89a9d180a *R/fn.R
+cce30a66b79a91779835e0a578d87b11 *R/formula.R
+68559dce0297d7ac39e92e9641dbc7dc *R/operators.R
+65ae6b11e3e1abb6fc48bc5197a14dfc *R/parse.R
+ba2c2f6a3ac5d35ce1d11fcb58792f38 *R/quo-unquote.R
+a4bb7debffb7d55af28f8dc93285eb63 *R/quo.R
+cffe6f4479320f0640aa3c2f970acf9c *R/quos.R
+83f1c70b36f26f7ddf9c0e0fa1290e82 *R/rlang.R
+2583e58030013fee46205deae2f071d7 *R/stack.R
+174f851dc9dd721543b129e6804240a3 *R/types.R
+237bacef41cc547b0256727998170196 *R/utils.R
+6d65b52cb4ae292f761f68b0b4bc075b *R/vector-chr.R
+c783ded521984be52b454008b052d363 *R/vector-coercion.R
+fc209d4253bdd65575665072492f8034 *R/vector-ctor.R
+92095e31792f829a142d5e00d81290f7 *R/vector-missing.R
+bd7a7c1849b9e4fc6bd0a74d90600544 *R/vector-raw.R
+ba84d9a9590dcd2c00955efc33412060 *R/vector-squash.R
+b0806a2d025031a83d77602c2c10962d *R/vector-utils.R
+a30526960e3890f9f1005faf0a0461a5 *README.md
+434467c5842f5c005d33976fca693fc1 *build/vignette.rds
+21c7dbbbe94c04ebc0079d1a6354a6c7 *inst/doc/tidy-evaluation.R
+2feb5d2bc3cdd936bedfc6fee3dbd6b3 *inst/doc/tidy-evaluation.Rmd
+901194adf5fa47401dd376c0d9991194 *inst/doc/tidy-evaluation.html
+133aeff5ff65188820b027e25029d469 *man/abort.Rd
+30412215df991bc15b5db13d6264b947 *man/are_na.Rd
+5691903ed6051a10cb885b822c0581dd *man/arg_match.Rd
+ebb8a63478c4f2f9dc185af512ccb697 *man/as_bytes.Rd
+612990b9b4ef107d09fb52a59d6aefbf *man/as_env.Rd
+1185d7838c1f66bb706ea43033f60d21 *man/as_function.Rd
+8cc1946a9726adeb7881d471fc3bce8f *man/as_overscope.Rd
+40f85c91b931931cbddbc0c8969d9b5a *man/as_pairlist.Rd
+7ea71a132a301fa8bb64a0fe565196bf *man/as_quosure.Rd
+0cdb3fa9235915d6fd5296f4d4198ad9 *man/as_utf8_character.Rd
+4e296349bd26886f15acbdc7bb529440 *man/bare-type-predicates.Rd
+c7cfdb4ad9aa7a51c353929a6200b7b3 *man/call_inspect.Rd
+acbeef661c054e58a0b68e7f3f607ee3 *man/caller_env.Rd
+d82f73bfa88e76a6be47a66d3da3b2b3 *man/cnd_signal.Rd
+3d8927bb436893d2971594293723a3b5 *man/dictionary.Rd
+ad26ea30b7bb0317b53e691c75a78718 *man/dots_list.Rd
+ab9a67f908943873763d03995ee59413 *man/dots_n.Rd
+83325b184e92828c29c253b373c79fba *man/dots_values.Rd
+2e214f2ddb1622ec924c8886685c18ca *man/duplicate.Rd
+4701438fbf7da409d2b907d0bc44686d *man/empty_env.Rd
+c439eca44cb399952d75a1f2c0e48060 *man/env.Rd
+620f033bd8a678f2e7d97497356cbc96 *man/env_bind.Rd
+7918f52da52c0fc84cb210b3366575bc *man/env_bury.Rd
+5c16008a4a5c52883b4ceb3087413014 *man/env_clone.Rd
+aba08942ec921d3f54cee6fa6b78b16a *man/env_depth.Rd
+361088c9c05e7bc9e30f18eb20955235 *man/env_get.Rd
+6b83a29686ff505c418df6252e2c5e51 *man/env_has.Rd
+0b71913e2459011017c3ae52d003a616 *man/env_inherits.Rd
+96e9379f512e5d9dd8269d6f72b88b84 *man/env_names.Rd
+7a4a9f46715236388c0a941fbe1cf3a0 *man/env_parent.Rd
+5c49bce36c025b77e6cc77b63aef9f7d *man/env_unbind.Rd
+875147eec9357123fc62368ef8e3aa0a *man/eval_bare.Rd
+d2a30b76487255ccdb73215b6694b87d *man/eval_tidy.Rd
+b56a283b539e04a0af38cbd238b7c584 *man/eval_tidy_.Rd
+bda0baef7dcab89e2bb80d7e0187d2b8 *man/exiting.Rd
+2f0760075737bb63d923dd4fe9a47eae *man/expr.Rd
+7761494dfd941105a3cc74c9c58e72cd *man/expr_interp.Rd
+e0c3f575d840093dd1a6ae4d90de89d8 *man/expr_label.Rd
+fd3ef6669a1911f88cfa499360bcd1e0 *man/exprs_auto_name.Rd
+f9fe2913a3200a2db8c3695e21b73dff *man/f_rhs.Rd
+fde75000d75782933733dfc1b3814540 *man/f_text.Rd
+bd280db5113614a94f04f147bfc79d33 *man/flatten.Rd
+fb6a9867bff132541ea3830d5707ad29 *man/fn_env.Rd
+26c48526f3aa89c4e4668b01a2afe5e0 *man/fn_fmls.Rd
+3e209927032d121ae514e43efa9b1bf3 *man/frame_position.Rd
+453b50458f0edfd7be7ec19393820398 *man/friendly_type.Rd
+47f585c20c1ef18a5008de584e10cba8 *man/get_env.Rd
+cd4eabd927b4d7f6c9d9a26338093656 *man/has_length.Rd
+17cb5c536e230ed5484640deddf632d9 *man/has_name.Rd
+1115d379210fc28faf2abaf5a3676620 *man/invoke.Rd
+bf0d5fcc12dcaa94fb82af49f8c72bc6 *man/is_callable.Rd
+409300831dc925b98eb2e63c8c26060e *man/is_condition.Rd
+91d26b703dec47f81afcb896132d0f8f *man/is_copyable.Rd
+ab65d6e0c9ab4b2914627ef58f4995ee *man/is_empty.Rd
+079abf9dab58e5170875c4a6c4d1cf48 *man/is_env.Rd
+2151f950540022d03f877b3ec4dce5c9 *man/is_expr.Rd
+b81e303c701370addd8ead4c3f4d6d1b *man/is_formula.Rd
+79037bf9e4be01f5fcf30dd21e237b3d *man/is_frame.Rd
+06eae5eed20273762f208bd88a323c88 *man/is_function.Rd
+528d325f118ed48392cc17c89f489eef *man/is_installed.Rd
+84c5da61926aea6786229dfa210815c8 *man/is_integerish.Rd
+c4c6a39ed0cc0d45d0ffb726ec1c39c1 *man/is_lang.Rd
+b4888543fa7d4d715edd043863b447c5 *man/is_named.Rd
+31d0b8b7e3eeba05f2f9e7a48c28113e *man/is_pairlist.Rd
+08a9249075af51e04eb4add9a8eb545b *man/is_quosure.Rd
+ef22617e10a178f3937ce38a1d574440 *man/is_stack.Rd
+aeef320a784189e1144fca2d7067797a *man/is_symbol.Rd
+c524dae3c587048154fff2fbded9f7b9 *man/is_true.Rd
+344f0500fd983c650caf6e24bb4e5c2e *man/lang.Rd
+b526bc7f15afd26fbffd2de8edb1f9b1 *man/lang_args.Rd
+0b388f31a29a0d0ac70b23067d438826 *man/lang_fn.Rd
+2525df88e074a8547cf1b670b9d3de6a *man/lang_head.Rd
+253abf95cb5556f755e8c568b46fc9eb *man/lang_modify.Rd
+ccf9f28cb463bb911377c3bd21f7c1c2 *man/lang_name.Rd
+3032e207871bc0957bddaef04e954bdf *man/lang_standardise.Rd
+1f235b21de0b8c28c810868bdb27cf76 *man/missing.Rd
+3baecd69a7bc63316aaf9a3fb4eacde3 *man/missing_arg.Rd
+45adf0b1c1d508b46d44355507625793 *man/modify.Rd
+93a4c240dedca4de14412160aa45f6f0 *man/mut_utf8_locale.Rd
+cc8ec25a9b3a0bdc933faee5ff911657 *man/names2.Rd
+0106e04a7574185c2fda2ecd9f385654 *man/new_cnd.Rd
+2ad8783d26be37103f821e5ae535c56b *man/new_formula.Rd
+6dbada34e8a919e4cf458edb2835e788 *man/new_function.Rd
+05fd79c496a929917a7e28508bdacf5f *man/ns_env.Rd
+5d55640855962a00cdd1fe48a2d3d98d *man/op-definition.Rd
+25968838703a8a2fa53e74a7c65ba972 *man/op-get-attr.Rd
+375233489e5c5afe90c40ea4c78dda33 *man/op-na-default.Rd
+104a6a1976e8c95b00c7f5f3457ba6d7 *man/op-null-default.Rd
+4768bb877b9ca24d8382bea3a825b3f6 *man/pairlist.Rd
+fc9ef6b7e8ddbdc56cc166842dca1acb *man/parse_expr.Rd
+3989d677759cf37d333fa4ab1f5eac0f *man/prepend.Rd
+90d2aca5c08c6285fa1eb36f4445d685 *man/prim_name.Rd
+137b4da43fc5ecac13cc83402a6b269d *man/quasiquotation.Rd
+6197cd58b97cb24f4e2f158d742132ab *man/quo-predicates.Rd
+bbb42d01e9d16e6dab76c2e16fa00de2 *man/quo_expr.Rd
+a247c0b8a541e022b8147cfa50df4977 *man/quosure.Rd
+e4a006fe9a9498382ef90834c5264762 *man/quosures.Rd
+cdee2d9c95af6e232eae2991a2fe6492 *man/restarting.Rd
+87d9067c7ddcebed4b77280ea3e2f5f0 *man/return_from.Rd
+98a7585f070be151686b231f308ecc3e *man/rst_abort.Rd
+0d28a4aecf02bdb643788692bcc351b1 *man/rst_list.Rd
+d14da9436c6344cdc75918b955486c7f *man/rst_muffle.Rd
+3ba96b24fec2e0677e8b352599bd37d2 *man/scalar-type-predicates.Rd
+ccb69ad935a7f66167ce1e9e88414874 *man/scoped_env.Rd
+b2d57876e6c3728ed527424542aebde7 *man/seq2.Rd
+1333a2826299cb9abbdb684566864412 *man/set_attrs.Rd
+026db39fd9f4cb660726662420c4b545 *man/set_chr_encoding.Rd
+bec53adc17993f50ff9c4e0aaf3fd905 *man/set_expr.Rd
+acd6a20d5d09073080930d5c76b259af *man/set_names.Rd
+d433ec764bdaecbd0276c4c60611650e *man/splice.Rd
+c9727451a7f85e262b695ae786944f52 *man/stack.Rd
+1384b5e2f829fbdc6e3d89d538f3c5a3 *man/stack_trim.Rd
+c952836e0c6db30ad049c4f211bb6cba *man/string.Rd
+6f8a7395c64e8ae026c3d164b10dc95d *man/switch_lang.Rd
+f95ec0dc293fb98a7ca94cc4e8dba11a *man/switch_type.Rd
+0a578bf3afcdba009b0db655ee5c3006 *man/sym.Rd
+5203eefea6b4ef422d6e1cf82c4e83f0 *man/tidyeval-data.Rd
+805cf52370fa7222620b2185b3b63034 *man/type-predicates.Rd
+abf10d959ea1107ee191ee7f711bda44 *man/type_of.Rd
+00e5b66ca64038097b167f487b8118f6 *man/vector-along.Rd
+fc59656f3e428a760dc0495920c26c9d *man/vector-coercion.Rd
+30622f02db5ac1da3bdfc0cfdadd92ee *man/vector-construction.Rd
+9290f69f9fe73382958c28b12884c57c *man/vector-len.Rd
+92b8a8c694ce8e878220ac22d5bfc210 *man/with_env.Rd
+ec96a4d180e97c582d48344245055fef *man/with_handlers.Rd
+6b11d3538b23bf99cd534c7c37d40a70 *man/with_restarts.Rd
+01b67f9a87ec275638ea7fda507a5bd2 *src/attrs.c
+a93282970e52e3588f17cdd94d708f4d *src/capture.c
+e70f730597da4fae90ec0d701207a200 *src/env.c
+3bb13c900498e3bc73f021b48e423fac *src/eval.c
+7180953c095a5d580b43cc8e578d8362 *src/export.c
+a38b760fd28821b93e4784c32ae1f63b *src/export.h
+5a4831941c2148102ff05a5fd4b7d264 *src/formula.c
+513c2b96983ca4b454e046861589ffa8 *src/formula.h
+31836fbc4cba415f6da6e042d89093da *src/init.c
+ae15801957c56233dfc9f4351fc2b372 *src/lang.c
+b57df213d7db4146c41dece2ce716c0c *src/pairlist.c
+b69fa43716f95ae3fb4c6c359a45e4bf *src/replace-na.c
+a347f2ebd39fa6de52d69919d2bb071f *src/sexp.c
+dd2aa426091ee2e615a749985871df59 *src/splice.c
+eef080034ed3e85f5bd91674698d34aa *src/symbol.c
+7415abdd0bbee28ab86181e349a67a08 *src/unquote.c
+343e916983e85ff27f3330282545f3e8 *src/utils.c
+2a8718fa75bc647961063608c3ec68ba *src/utils.h
+a23974ec10dcac46fa35421c789a909f *src/vector.h
+d16f40ca8b10582b3cd54e3bceedd568 *tests/testthat.R
+58e0ee74b734d9a1d7a6487f81711040 *tests/testthat/helper-capture.R
+8ee6bc6940e666fcf8d966cacdd623b4 *tests/testthat/helper-locale.R
+a20ff5a7133d7e0407ccd1ea4ff6034e *tests/testthat/helper-stack.R
+f39aa969618c1abb58b4f3f0ca828969 *tests/testthat/test-arg.R
+00ab0beaa7906914585f0f68bca334f9 *tests/testthat/test-attr.R
+401bef78b8d85991bf8eb2ef3a504b7a *tests/testthat/test-compat.R
+b2b0ce84a6a1a1958bdeb85ff6e32973 *tests/testthat/test-conditions.R
+f5cd5c0e09a4344d933cfb812b638b7d *tests/testthat/test-dictionary.R
+dbf409f79498787684f6573ee1d6a0ef *tests/testthat/test-dots.R
+c8ef7ffbf320fa6712f1417208e197e9 *tests/testthat/test-encoding.R
+f539828c6b65165db8d14a37e32cd0f7 *tests/testthat/test-env.R
+23f0564b9101b2409f843a979b2e018b *tests/testthat/test-eval.R
+ea755ed6b9be1b5b101a47eea861e866 *tests/testthat/test-fn.R
+8aef26e61b3bd3a1242f544545d1db3e *tests/testthat/test-formula.R
+7626bd3fe42f3d81bedaebd1a9c12c00 *tests/testthat/test-lang-call.R
+b91fa2f438568c1d6510e3a6497d8950 *tests/testthat/test-lang-expr.R
+3444dcbd93661f22999aa2544070cf03 *tests/testthat/test-lang.R
+45b0837f08b9a2d485a2d2c2be1ab9cf *tests/testthat/test-operators.R
+be988bf66c3673ded7a9ba5ee474ce4a *tests/testthat/test-parse.R
+9aa5a7d3440162f931853e2d3118d77d *tests/testthat/test-quo-enquo.R
+21489a8fb3068bdd2d00f4d17fa41c6e *tests/testthat/test-quosure.R
+48befc498a6ee454bffd036a8bdf6071 *tests/testthat/test-stack.R
+7bb827063f12cdde9ee68cabb5a74d51 *tests/testthat/test-tidy-capture.R
+a8171390c87680d0ad8737f3cc056fe8 *tests/testthat/test-tidy-eval.R
+b1f4b073912d27e34e65e6e4d22f45f1 *tests/testthat/test-tidy-unquote.R
+2bc782df7ad5fa0368bc31cffb412cca *tests/testthat/test-types-coercion.R
+c048b0fca9ca76d7eaa9c7942da5a612 *tests/testthat/test-types.R
+874c18cb5b85c2d779909b4ad8e3f5ff *tests/testthat/test-utils.R
+277d39e68a07d2a5753b6b8fc5c45d9f *tests/testthat/test-vector.R
+09a92018b8b48619da0b10517dfee564 *vignettes/releases/rlang-0.1.Rmd
+2feb5d2bc3cdd936bedfc6fee3dbd6b3 *vignettes/tidy-evaluation.Rmd
diff --git a/NAMESPACE b/NAMESPACE
new file mode 100644
index 0000000..c873f4a
--- /dev/null
+++ b/NAMESPACE
@@ -0,0 +1,371 @@
+# Generated by roxygen2: do not edit by hand
+
+S3method("$",dictionary)
+S3method("$<-",dictionary)
+S3method("[",quosures)
+S3method("[",stack)
+S3method("[[",dictionary)
+S3method("[[<-",dictionary)
+S3method(as_dictionary,"NULL")
+S3method(as_dictionary,data.frame)
+S3method(as_dictionary,default)
+S3method(as_dictionary,dictionary)
+S3method(as_dictionary,environment)
+S3method(c,quosures)
+S3method(has_binding,default)
+S3method(has_binding,environment)
+S3method(length,dictionary)
+S3method(names,dictionary)
+S3method(print,dictionary)
+S3method(print,frame)
+S3method(print,quosure)
+S3method(rst_muffle,default)
+S3method(rst_muffle,mufflable)
+S3method(rst_muffle,simpleMessage)
+S3method(rst_muffle,simpleWarning)
+S3method(str,dictionary)
+S3method(str,quosure)
+export("!!!")
+export("!!")
+export("%@%")
+export("%|%")
+export("%||%")
+export(":=")
+export("f_env<-")
+export("f_lhs<-")
+export("f_rhs<-")
+export("fn_env<-")
+export(.data)
+export(UQ)
+export(UQE)
+export(UQS)
+export(abort)
+export(are_na)
+export(arg_match)
+export(as_bytes)
+export(as_character)
+export(as_closure)
+export(as_complex)
+export(as_dictionary)
+export(as_double)
+export(as_env)
+export(as_function)
+export(as_integer)
+export(as_list)
+export(as_logical)
+export(as_native_character)
+export(as_native_string)
+export(as_overscope)
+export(as_pairlist)
+export(as_quosure)
+export(as_quosureish)
+export(as_string)
+export(as_utf8_character)
+export(as_utf8_string)
+export(base_env)
+export(bytes)
+export(bytes_along)
+export(bytes_len)
+export(call_depth)
+export(call_frame)
+export(call_inspect)
+export(call_stack)
+export(caller_env)
+export(caller_fn)
+export(caller_frame)
+export(child_env)
+export(chr)
+export(chr_along)
+export(chr_encoding)
+export(chr_len)
+export(cnd_abort)
+export(cnd_error)
+export(cnd_message)
+export(cnd_signal)
+export(cnd_warning)
+export(coerce_class)
+export(coerce_lang)
+export(coerce_type)
+export(cpl)
+export(cpl_along)
+export(cpl_len)
+export(ctxt_depth)
+export(ctxt_frame)
+export(ctxt_stack)
+export(current_frame)
+export(dbl)
+export(dbl_along)
+export(dbl_len)
+export(dots_definitions)
+export(dots_list)
+export(dots_n)
+export(dots_splice)
+export(dots_values)
+export(duplicate)
+export(empty_env)
+export(enexpr)
+export(enquo)
+export(env)
+export(env_bind)
+export(env_bind_exprs)
+export(env_bind_fns)
+export(env_bury)
+export(env_clone)
+export(env_depth)
+export(env_get)
+export(env_has)
+export(env_inherits)
+export(env_names)
+export(env_parent)
+export(env_parents)
+export(env_tail)
+export(env_unbind)
+export(eval_bare)
+export(eval_tidy)
+export(eval_tidy_)
+export(exiting)
+export(expr)
+export(expr_interp)
+export(expr_label)
+export(expr_name)
+export(expr_text)
+export(exprs)
+export(exprs_auto_name)
+export(f_env)
+export(f_label)
+export(f_lhs)
+export(f_name)
+export(f_rhs)
+export(f_text)
+export(flatten)
+export(flatten_chr)
+export(flatten_cpl)
+export(flatten_dbl)
+export(flatten_if)
+export(flatten_int)
+export(flatten_lgl)
+export(flatten_raw)
+export(fn_env)
+export(fn_fmls)
+export(fn_fmls_names)
+export(fn_fmls_syms)
+export(frame_position)
+export(friendly_type)
+export(get_env)
+export(get_expr)
+export(global_env)
+export(global_frame)
+export(has_length)
+export(has_name)
+export(have_name)
+export(inform)
+export(inplace)
+export(int)
+export(int_along)
+export(int_len)
+export(invoke)
+export(is_atomic)
+export(is_bare_atomic)
+export(is_bare_bytes)
+export(is_bare_character)
+export(is_bare_double)
+export(is_bare_env)
+export(is_bare_formula)
+export(is_bare_integer)
+export(is_bare_integerish)
+export(is_bare_list)
+export(is_bare_logical)
+export(is_bare_numeric)
+export(is_bare_raw)
+export(is_bare_string)
+export(is_bare_vector)
+export(is_binary_lang)
+export(is_bytes)
+export(is_call_stack)
+export(is_callable)
+export(is_character)
+export(is_chr_na)
+export(is_closure)
+export(is_copyable)
+export(is_cpl_na)
+export(is_dbl_na)
+export(is_definition)
+export(is_dictionary)
+export(is_dictionaryish)
+export(is_double)
+export(is_empty)
+export(is_env)
+export(is_eval_stack)
+export(is_expr)
+export(is_false)
+export(is_formula)
+export(is_formulaish)
+export(is_frame)
+export(is_function)
+export(is_installed)
+export(is_int_na)
+export(is_integer)
+export(is_integerish)
+export(is_lang)
+export(is_lgl_na)
+export(is_list)
+export(is_logical)
+export(is_missing)
+export(is_na)
+export(is_named)
+export(is_node)
+export(is_null)
+export(is_pairlist)
+export(is_primitive)
+export(is_primitive_eager)
+export(is_primitive_lazy)
+export(is_quosure)
+export(is_quosureish)
+export(is_quosures)
+export(is_raw)
+export(is_scalar_atomic)
+export(is_scalar_bytes)
+export(is_scalar_character)
+export(is_scalar_double)
+export(is_scalar_integer)
+export(is_scalar_integerish)
+export(is_scalar_list)
+export(is_scalar_logical)
+export(is_scalar_raw)
+export(is_scalar_vector)
+export(is_scoped)
+export(is_spliced)
+export(is_spliced_bare)
+export(is_stack)
+export(is_string)
+export(is_symbol)
+export(is_symbolic)
+export(is_syntactic_literal)
+export(is_true)
+export(is_unary_lang)
+export(is_vector)
+export(lang)
+export(lang_args)
+export(lang_args_names)
+export(lang_fn)
+export(lang_head)
+export(lang_modify)
+export(lang_name)
+export(lang_standardise)
+export(lang_tail)
+export(lang_type_of)
+export(lgl)
+export(lgl_along)
+export(lgl_len)
+export(list_along)
+export(list_len)
+export(ll)
+export(locally)
+export(maybe_missing)
+export(missing_arg)
+export(modify)
+export(mut_attrs)
+export(mut_latin1_locale)
+export(mut_mbcs_locale)
+export(mut_node_caar)
+export(mut_node_cadr)
+export(mut_node_car)
+export(mut_node_cdar)
+export(mut_node_cddr)
+export(mut_node_cdr)
+export(mut_node_tag)
+export(mut_utf8_locale)
+export(na_chr)
+export(na_cpl)
+export(na_dbl)
+export(na_int)
+export(na_lgl)
+export(names2)
+export(new_cnd)
+export(new_definition)
+export(new_environment)
+export(new_formula)
+export(new_function)
+export(new_language)
+export(new_overscope)
+export(new_quosure)
+export(node)
+export(node_caar)
+export(node_cadr)
+export(node_car)
+export(node_cdar)
+export(node_cddr)
+export(node_cdr)
+export(node_tag)
+export(ns_env)
+export(ns_env_name)
+export(ns_imports_env)
+export(overscope_clean)
+export(overscope_eval_next)
+export(parse_expr)
+export(parse_exprs)
+export(parse_quosure)
+export(parse_quosures)
+export(pkg_env)
+export(pkg_env_name)
+export(prepend)
+export(prim_name)
+export(quo)
+export(quo_expr)
+export(quo_is_lang)
+export(quo_is_missing)
+export(quo_is_null)
+export(quo_is_symbol)
+export(quo_is_symbolic)
+export(quo_label)
+export(quo_name)
+export(quo_text)
+export(quos)
+export(quos_auto_name)
+export(raw_along)
+export(raw_len)
+export(rep_along)
+export(restarting)
+export(return_from)
+export(return_to)
+export(rst_abort)
+export(rst_exists)
+export(rst_jump)
+export(rst_list)
+export(rst_maybe_jump)
+export(rst_muffle)
+export(scoped_env)
+export(scoped_envs)
+export(scoped_names)
+export(seq2)
+export(seq2_along)
+export(set_attrs)
+export(set_chr_encoding)
+export(set_env)
+export(set_expr)
+export(set_names)
+export(set_str_encoding)
+export(splice)
+export(squash)
+export(squash_chr)
+export(squash_cpl)
+export(squash_dbl)
+export(squash_if)
+export(squash_int)
+export(squash_lgl)
+export(squash_raw)
+export(stack_trim)
+export(str_encoding)
+export(string)
+export(switch_class)
+export(switch_lang)
+export(switch_type)
+export(sym)
+export(syms)
+export(type_of)
+export(warn)
+export(with_env)
+export(with_handlers)
+export(with_restarts)
+importFrom(utils,str)
+useDynLib(rlang, .registration = TRUE)
diff --git a/NEWS.md b/NEWS.md
new file mode 100644
index 0000000..db76322
--- /dev/null
+++ b/NEWS.md
@@ -0,0 +1,61 @@
+
+# rlang 0.1.2
+
+This hotfix release makes rlang compatible with the R 3.1 branch.
+
+
+# rlang 0.1.1
+
+This release includes two important fixes for tidy evaluation:
+
+* Bare formulas are now evaluated in the correct environment in
+  tidyeval functions.
+
+* `enquo()` now works properly within compiled functions. Before this
+  release, constants optimised by the bytecode compiler couldn't be
+  enquoted.
+
+
+## New functions:
+
+* The `new_environment()` constructor creates a child of the empty
+  environment and takes an optional named list of data to populate it.
+  Compared to `env()` and `child_env()`, it is meant to create
+  environments as data structures rather than as part of a scope
+  hierarchy.
+
+* The `new_language()` constructor creates calls out of a callable
+  object (a function or an expression) and a pairlist of arguments. It
+  is useful to avoid costly internal coercions between lists and
+  pairlists of arguments.
+
+
+## UI improvements:
+
+* `env_child()`'s first argument is now `.parent` instead of `parent`.
+
+* `mut_` setters like `mut_attrs()` and environment helpers like
+  `env_bind()` and `env_unbind()` now return their (modified) input
+  invisibly. This follows the tidyverse convention that functions
+  called primarily for their side effects should return their input
+  invisibly.
+
+* `is_pairlist()` now returns `TRUE` for `NULL`. We added `is_node()`
+  to test for actual pairlist nodes. In other words, `is_pairlist()`
+  tests for the data structure while `is_node()` tests for the type.
+
+
+## Bugfixes:
+
+* `env()` and `env_child()` can now get arguments whose names start
+  with `.`.  Prior to this fix, these arguments were partial-matching
+  on `env_bind()`'s `.env` argument.
+
+* The internal `replace_na()` symbol was renamed to avoid a collision
+  with an exported function in tidyverse. This solves an issue
+  occurring in old versions of R prior to 3.3.2 (#133).
+
+
+# rlang 0.1.0
+
+Initial release.
diff --git a/R/arg.R b/R/arg.R
new file mode 100644
index 0000000..b5a4696
--- /dev/null
+++ b/R/arg.R
@@ -0,0 +1,155 @@
+#' Match an argument to a character vector
+#'
+#' @description
+#'
+#' This is equivalent to [base::match.arg()] with a few differences:
+#'
+#' * Partial matches trigger an error.
+#'
+#' * Error messages are a bit more informative and obey the tidyverse
+#'   standards.
+#'
+#' @param arg A symbol referring to an argument accepting strings.
+#' @param values The possible values that `arg` can take. If `NULL`,
+#'   the values are taken from the function definition of the [caller
+#'   frame][caller_frame].
+#' @return The string supplied to `arg`.
+#' @export
+#' @examples
+#' fn <- function(x = c("foo", "bar")) arg_match(x)
+#' fn("bar")
+#'
+#' # This would throw an informative error if run:
+#' # fn("b")
+#' # fn("baz")
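+#'
+#' # A hedged sketch: `values` can also be supplied explicitly rather
+#' # than read from the caller's function definition:
+#' fn2 <- function(x) arg_match(x, c("foo", "bar", "baz"))
+#' fn2("baz")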
+arg_match <- function(arg, values = NULL) {
+  arg_expr <- enexpr(arg)
+  if (!is_symbol(arg_expr)) {
+    abort("Internal error: `arg_match()` expects a symbol")
+  }
+
+  arg_nm <- as_string(arg_expr)
+
+  if (is_null(values)) {
+    fn <- caller_fn()
+    values <- fn_fmls(fn)[[arg_nm]]
+    values <- eval_bare(values, get_env(fn))
+  }
+  if (!is_character(values)) {
+    abort("Internal error: `values` must be a character vector")
+  }
+  if (!is_character(arg)) {
+    abort(paste0(chr_quoted(arg_nm), " must be a character vector"))
+  }
+
+  arg <- arg[[1]]
+  i <- match(arg, values)
+
+  if (is_na(i)) {
+    msg <- paste0(chr_quoted(arg_nm), " should be one of: ")
+    msg <- paste0(msg, chr_enumerate(chr_quoted(values, "\"")))
+
+    i_partial <- pmatch(arg, values)
+    if (!is_na(i_partial)) {
+      candidate <- values[[i_partial]]
+      candidate <- chr_quoted(candidate, "\"")
+      msg <- paste0(msg, "\n", "Did you mean ", candidate, "?")
+    }
+
+    abort(msg)
+  }
+
+  values[[i]]
+}
+
+chr_quoted <- function(chr, type = "`") {
+  paste0(type, chr, type)
+}
+chr_enumerate <- function(chr, sep = ", ") {
+  if (length(chr) < 2) {
+    return(chr)
+  }
+  n <- length(chr)
+  head <- chr[seq_len(n - 1)]
+  last <- chr[length(chr)]
+
+  head <- paste(head, collapse = ", ")
+  paste(head, "or", last)
+}
+
+#' Generate or handle a missing argument
+#'
+#' These functions help you use the missing argument as a regular R
+#' object. It is valid to generate a missing argument and assign it in
+#' the current environment or in a list. However, once assigned in the
+#' environment, the missing argument normally cannot be touched.
+#' `maybe_missing()` checks whether the object is the missing
+#' argument, and regenerates it if needed to prevent R from throwing a
+#' missing error. In addition, `is_missing()` lets you check for a
+#' missing argument in a larger range of situations than
+#' [base::missing()] (see examples).
+#' @param x An object that might be the missing argument.
+#' @export
+#' @examples
+#' # The missing argument can be useful to generate calls
+#' quo(f(x = !! missing_arg()))
+#' quo(f(x = !! NULL))
+#'
+#'
+#' # It is perfectly valid to generate and assign the missing
+#' # argument.
+#' x <- missing_arg()
+#' l <- list(missing_arg())
+#'
+#' # Note that accessing a missing argument contained in a list does
+#' # not trigger an error:
+#' l[[1]]
+#' is.null(l[[1]])
+#'
+#' # But if the missing argument is assigned in the current
+#' # environment, it is no longer possible to touch it. The following
+#' # lines would all return errors:
+#' #> x
+#' #> is.null(x)
+#'
+#' # In these cases, you can use maybe_missing() to manipulate an
+#' # object that might be the missing argument without triggering a
+#' # missing error:
+#' maybe_missing(x)
+#' is.null(maybe_missing(x))
+#' is_missing(maybe_missing(x))
+#'
+#'
+#' # base::missing() does not work well if you supply an
+#' # expression. The following lines would throw an error:
+#'
+#' #> missing(missing_arg())
+#' #> missing(l[[1]])
+#'
+#' # while is_missing() will work as expected:
+#' is_missing(missing_arg())
+#' is_missing(l[[1]])
+missing_arg <- function() {
+  quote(expr = )
+}
+
+#' @rdname missing_arg
+#' @export
+is_missing <- function(x) {
+  expr <- substitute(x)
+  if (typeof(expr) == "symbol" && missing(x)) {
+    TRUE
+  } else {
+    identical(x, missing_arg())
+  }
+}
+
+#' @rdname missing_arg
+#' @export
+maybe_missing <- function(x) {
+  if (is_missing(x) || quo_is_missing(x)) {
+    missing_arg()
+  } else {
+    x
+  }
+}
diff --git a/R/attr.R b/R/attr.R
new file mode 100644
index 0000000..f3f7dea
--- /dev/null
+++ b/R/attr.R
@@ -0,0 +1,282 @@
+#' Add attributes to an object
+#'
+#' `set_attrs()` adds, changes, or zaps attributes of objects. Pass a
+#' single unnamed `NULL` as argument to zap all attributes. For
+#' [uncopyable][is_copyable] types, use `mut_attrs()`.
+#'
+#' Unlike [structure()], these setters have no special handling of
+#' internal attribute names like `.Dim`, `.Dimnames` or `.Names`.
+#'
+#' @param .x An object to decorate with attributes.
+#' @param ... A list of named attributes. These have [explicit
+#'   splicing semantics][dots_list]. Pass a single unnamed `NULL` to
+#'   zap all attributes from `.x`.
+#' @return `set_attrs()` returns a modified [shallow copy][duplicate]
+#'   of `.x`. `mut_attrs()` invisibly returns the original `.x`
+#'   modified in place.
+#' @export
+#' @examples
+#' set_attrs(letters, names = 1:26, class = "my_chr")
+#'
+#' # Splice a list of attributes:
+#' attrs <- list(attr = "attr", names = 1:26, class = "my_chr")
+#' obj <- set_attrs(letters, splice(attrs))
+#' obj
+#'
+#' # Zap attributes by passing a single unnamed NULL argument:
+#' set_attrs(obj, NULL)
+#' set_attrs(obj, !!! list(NULL))
+#'
+#' # Note that set_attrs() never modifies objects in place:
+#' obj
+#'
+#' # For uncopyable types, mut_attrs() lets you modify in place:
+#' env <- env()
+#' mut_attrs(env, foo = "bar")
+#' env
+set_attrs <- function(.x, ...) {
+  if (!is_copyable(.x)) {
+    abort("`.x` is uncopyable: use `mut_attrs()` to change attributes in place")
+  }
+  set_attrs_impl(.x, ...)
+}
+#' @rdname set_attrs
+#' @export
+mut_attrs <- function(.x, ...) {
+  if (is_copyable(.x)) {
+    abort("`.x` is copyable: use `set_attrs()` to change attributes without side effect")
+  }
+  invisible(set_attrs_impl(.x, ...))
+}
+set_attrs_impl <- function(.x, ...) {
+  attrs <- dots_list(...)
+
+  # If passed a single unnamed NULL, zap attributes
+  if (identical(attrs, set_attrs_null)) {
+    attributes(.x) <- NULL
+  } else {
+    attributes(.x) <- c(attributes(.x), attrs)
+  }
+
+  .x
+}
+set_attrs_null <- list(NULL)
+names(set_attrs_null) <- ""
+
+#' Is object named?
+#'
+#' `is_named()` checks that `x` has a names attribute, and that none of
+#' the names are missing or empty (`NA` or `""`). `is_dictionaryish()`
+#' checks that an object is a dictionary: that it has actual names and
+#' in addition that there are no duplicated names. `have_name()`
+#' is a vectorised version of `is_named()`.
+#'
+#' @param x An object to test.
+#' @return `is_named()` and `is_dictionaryish()` are scalar predicates
+#'   and return `TRUE` or `FALSE`. `have_name()` is vectorised and
+#'   returns a logical vector as long as the input.
+#' @export
+#' @examples
+#' # A data frame usually has valid, unique names
+#' is_named(mtcars)
+#' have_name(mtcars)
+#' is_dictionaryish(mtcars)
+#'
+#' # But data frames can also have duplicated columns:
+#' dups <- cbind(mtcars, cyl = seq_len(nrow(mtcars)))
+#' is_dictionaryish(dups)
+#'
+#' # The names are still valid:
+#' is_named(dups)
+#' have_name(dups)
+#'
+#'
+#' # For empty objects the semantics are slightly different.
+#' # is_dictionaryish() returns TRUE for empty objects:
+#' is_dictionaryish(list())
+#'
+#' # But is_named() will only return TRUE if there is a names
+#' # attribute (a zero-length character vector in this case):
+#' x <- set_names(list(), character(0))
+#' is_named(x)
+#'
+#'
+#' # Empty and missing names are invalid:
+#' invalid <- dups
+#' names(invalid)[2] <- ""
+#' names(invalid)[5] <- NA
+#'
+#' # is_named() performs a global check while have_name() can show you
+#' # where the problem is:
+#' is_named(invalid)
+#' have_name(invalid)
+#'
+#' # have_name() will work even with vectors that don't have a names
+#' # attribute:
+#' have_name(letters)
+is_named <- function(x) {
+  nms <- names(x)
+
+  if (is_null(nms)) {
+    return(FALSE)
+  }
+
+  if (any(nms == "" | is.na(nms))) {
+    return(FALSE)
+  }
+
+  TRUE
+}
+#' @rdname is_named
+#' @export
+is_dictionaryish <- function(x) {
+  if (!length(x)) {
+    return(!is.null(x))
+  }
+
+  is_named(x) && !any(duplicated(names(x)))
+}
+#' @rdname is_named
+#' @export
+have_name <- function(x) {
+  nms <- names(x)
+  if (is.null(nms)) {
+    rep(FALSE, length(x))
+  } else {
+    !(is.na(nms) | nms == "")
+  }
+}
+
+#' Does an object have an element with this name?
+#'
+#' This function returns a logical value that indicates if a data frame or
+#' another named object contains an element with a specific name.
+#'
+#' Unnamed objects are treated as if all names are empty strings. `NA`
+#' input gives `FALSE` as output.
+#'
+#' @param x A data frame or another named object
+#' @param name Element name(s) to check
+#' @return A logical vector of the same length as `name`
+#' @examples
+#' has_name(iris, "Species")
+#' has_name(mtcars, "gears")
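+#'
+#' # The return value is vectorised over `name`:
+#' has_name(mtcars, c("mpg", "gears"))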
+#' @export
+has_name <- function(x, name) {
+  name %in% names2(x)
+}
+
+#' Set names of a vector
+#'
+#' This is equivalent to [stats::setNames()], with more features and
+#' stricter argument checking.
+#'
+#' @param x Vector to name.
+#' @param nm,... Vector of names, the same length as `x`.
+#'
+#'   You can specify names in the following ways:
+#'
+#'   * If you do nothing, `x` will be named with itself.
+#'
+#'   * If `x` already has names, you can provide a function or formula
+#'     to transform the existing names. In that case, `...` is passed
+#'     to the function.
+#'
+#'   * If `nm` is `NULL`, the names are removed (if present).
+#'
+#'   * In all other cases, `nm` and `...` are passed to [chr()]. This
+#'     gives implicit splicing semantics: you can pass character
+#'     vectors or list of character vectors indistinctly.
+#' @export
+#' @examples
+#' set_names(1:4, c("a", "b", "c", "d"))
+#' set_names(1:4, letters[1:4])
+#' set_names(1:4, "a", "b", "c", "d")
+#'
+#' # If the second argument is omitted, a vector is named with itself
+#' set_names(letters[1:5])
+#'
+#' # Alternatively you can supply a function
+#' set_names(1:10, ~ letters[seq_along(.)])
+#' set_names(head(mtcars), toupper)
+#'
+#' # `...` is passed to the function:
+#' set_names(head(mtcars), paste0, "_foo")
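+#'
+#' # A sketch assuming the chr() splicing semantics described above:
+#' # several name vectors are spliced together
+#' set_names(1:4, c("a", "b"), c("c", "d"))
+#'
+#' # Pass NULL to remove names:
+#' set_names(set_names(1:3, letters[1:3]), NULL)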
+set_names <- function(x, nm = x, ...) {
+  if (!is_vector(x)) {
+    abort("`x` must be a vector")
+  }
+
+  if (is_function(nm) || is_formula(nm)) {
+    nm <- as_function(nm)
+    nm <- nm(names2(x), ...)
+  } else if (!is_null(nm)) {
+    # Make sure `x` is serialised when no arguments are provided.
+    nm <- as.character(nm)
+    nm <- chr(nm, ...)
+  }
+
+  if (!is_null(nm) && !is_character(nm, length(x))) {
+    abort("`nm` must be `NULL` or a character vector the same length as `x`")
+  }
+
+  names(x) <- nm
+  x
+}
+
+#' Get names of a vector
+#'
+#' This names getter always returns a character vector, even when an
+#' object does not have a `names` attribute. In this case, it returns
+#' a vector of empty names `""`. It also standardises missing names to
+#' `""`.
+#'
+#' @param x A vector.
+#' @export
+#' @examples
+#' names2(letters)
+#'
+#' # It also takes care of standardising missing names:
+#' x <- set_names(1:3, c("a", NA, "b"))
+#' names2(x)
+names2 <- function(x) {
+  if (type_of(x) == "environment") abort("Use env_names() for environments.")
+  nms <- names(x)
+  if (is_null(nms)) {
+    rep("", length(x))
+  } else {
+    nms %|% ""
+  }
+}
+
+length_ <- function(x) {
+  .Call(rlang_length, x)
+}
+
+#' How long is an object?
+#'
+#' This is a function for the common task of testing the length of an
+#' object. It checks the length of an object in a non-generic way:
+#' [base::length()] methods are ignored.
+#'
+#' @param x An R object.
+#' @param n A specific length to test `x` with. If `NULL`,
+#'   `has_length()` returns `TRUE` if `x` has length greater than
+#'   zero, and `FALSE` otherwise.
+#' @export
+#' @examples
+#' has_length(list())
+#' has_length(list(), 0)
+#'
+#' has_length(letters)
+#' has_length(letters, 20)
+#' has_length(letters, 26)
+has_length <- function(x, n = NULL) {
+  len <- .Call(rlang_length, x)
+
+  if (is_null(n)) {
+    as.logical(len)
+  } else {
+    len == n
+  }
+}
diff --git a/R/cnd-handlers.R b/R/cnd-handlers.R
new file mode 100644
index 0000000..705b3d3
--- /dev/null
+++ b/R/cnd-handlers.R
@@ -0,0 +1,235 @@
+#' Establish handlers on the stack
+#'
+#' Condition handlers are functions established on the evaluation
+#' stack (see [ctxt_stack()]) that are called by R when a condition is
+#' signalled (see [cnd_signal()] and [abort()] for two common signal
+#' functions). They come in two types: exiting handlers, which jump
+#' out of the signalling context and are transferred to
+#' `with_handlers()` before being executed. And inplace handlers,
+#' which are executed within the signal functions.
+#'
+#' An exiting handler takes charge of the condition. No other
+#' handler on the stack gets a chance to handle the condition. The
+#' handler is executed and `with_handlers()` returns the return value
+#' of that handler. On the other hand, in place handlers do not
+#' necessarily take charge. If they return normally, they decline to
+#' handle the condition, and R looks for other handlers established on
+#' the evaluation stack. Only by jumping to an earlier call frame can
+#' an inplace handler take charge of the condition and stop the
+#' signalling process. Sometimes, a muffling restart has been
+#' established for the purpose of jumping out of the signalling
+#' function but not out of the context where the condition was
+#' signalled, which allows execution to resume normally. See
+#' [rst_muffle()], the `muffle` argument of [inplace()], and the
+#' `mufflable` argument of [cnd_signal()].
+#'
+#' Exiting handlers are established first by `with_handlers()`, and in
+#' place handlers are installed in second place. The latter handlers
+#' thus take precedence over the former.
+#'
+#' @inheritParams with_restarts
+#' @param .expr An expression to execute in a context where new
+#'   handlers are established. The underscored version takes a quoted
+#'   expression or a quoted formula.
+#' @param ... Named handlers. Handlers should inherit from `exiting`
+#'   or `inplace`. See [exiting()] and [inplace()] for constructing
+#'   such handlers. Dots are evaluated with [explicit
+#'   splicing][dots_list].
+#' @seealso [exiting()], [inplace()].
+#' @export
+#' @examples
+#' # Signal a condition with cnd_signal():
+#' fn <- function() {
+#'   g()
+#'   cat("called?\n")
+#'   "fn() return value"
+#' }
+#' g <- function() {
+#'   h()
+#'   cat("called?\n")
+#' }
+#' h <- function() {
+#'   cnd_signal("foo")
+#'   cat("called?\n")
+#' }
+#'
+#' # Exiting handlers jump to with_handlers() before being
+#' # executed. Their return value is handed over:
+#' handler <- function(c) "handler return value"
+#' with_handlers(fn(), foo = exiting(handler))
+#'
+#' # In place handlers are called in turn and their return value is
+#' # ignored. Returning just means they are declining to take charge of
+#' # the condition. However, they can produce side-effects such as
+#' # displaying a message:
+#' some_handler <- function(c) cat("some handler!\n")
+#' other_handler <- function(c) cat("other handler!\n")
+#' with_handlers(fn(), foo = inplace(some_handler), foo = inplace(other_handler))
+#'
+#' # If an in place handler jumps to an earlier context, it takes
+#' # charge of the condition and no other handler gets a chance to
+#' # deal with it. The canonical way of transferring control is by
+#' # jumping to a restart. See with_restarts() and restarting()
+#' # documentation for more on this:
+#' exiting_handler <- function(c) rst_jump("rst_foo")
+#' fn2 <- function() {
+#'   with_restarts(g(), rst_foo = function() "restart value")
+#' }
+#' with_handlers(fn2(), foo = inplace(exiting_handler), foo = inplace(other_handler))
+with_handlers <- function(.expr, ...) {
+  quo <- enquo(.expr)
+  handlers <- dots_list(...)
+
+  inplace <- keep(handlers, inherits, "inplace")
+  exiting <- keep(handlers, inherits, "exiting")
+
+  if (length(handlers) > length(exiting) + length(inplace)) {
+    abort("all handlers should inherit from `exiting` or `inplace`")
+  }
+  if (length(exiting)) {
+    quo <- quo(tryCatch(!! quo, !!! exiting))
+  }
+  if (length(inplace)) {
+    quo <- quo(withCallingHandlers(!! quo, !!! inplace))
+  }
+
+  eval_tidy(quo)
+}
+
+#' Create an exiting or in place handler
+#'
+#' There are two types of condition handlers: exiting handlers, which
+#' are thrown to the place where they have been established (e.g.,
+#' [with_handlers()]'s evaluation frame), and local handlers, which
+#' are executed in place (e.g., where the condition has been
+#' signalled). `exiting()` and `inplace()` create handlers suitable
+#' for [with_handlers()].
+#'
+#' A subtle point in the R language is that conditions are not thrown,
+#' handlers are. [base::tryCatch()] and [with_handlers()] actually
+#' catch handlers rather than conditions. When a critical condition
+#' is signalled with [base::stop()] or [abort()], R inspects the handler
+#' stack and looks for a handler that can deal with the condition. If
+#' it finds an exiting handler, it throws it to the function that
+#' established it ([with_handlers()]). That is, it interrupts the
+#' normal course of evaluation and jumps to `with_handlers()`
+#' evaluation frame (see [ctxt_stack()]), and only then and there the
+#' handler is called. On the other hand, if R finds an inplace
+#' handler, it executes it locally. The inplace handler can choose to
+#' handle the condition by jumping out of the frame (see [rst_jump()]
+#' or [return_from()]). If it returns locally, it declines to handle
+#' the condition which is passed to the next relevant handler on the
+#' stack. If no handler is found or is able to deal with the critical
+#' condition (by jumping out of the frame), R will then jump out of
+#' the faulty evaluation frame to top-level, via the abort restart
+#' (see [rst_abort()]).
+#'
+#' @param handler A handler function that takes a condition as
+#'   argument. This is passed to [as_function()] and can thus be a
+#'   formula describing a lambda function.
+#' @param muffle Whether to muffle the condition after executing an
+#'   inplace handler. The signalling function must have established a
+#'   muffling restart. Otherwise, an error will be issued.
+#' @seealso [with_handlers()] for examples, [restarting()] for another
+#'   kind of inplace handler.
+#' @export
+#' @examples
+#' # You can supply a function taking a condition as argument:
+#' hnd <- exiting(function(c) cat("handled foo\n"))
+#' with_handlers(cnd_signal("foo"), foo = hnd)
+#'
+#' # Or a lambda-formula where "." is bound to the condition:
+#' with_handlers(foo = inplace(~cat("hello", .$attr, "\n")), {
+#'   cnd_signal("foo", attr = "there")
+#'   "foo"
+#' })
+exiting <- function(handler) {
+  handler <- as_function(handler)
+  structure(handler, class = c("exiting", "handler"))
+}
+#' @rdname exiting
+#' @export
+inplace <- function(handler, muffle = FALSE) {
+  handler <- as_function(handler)
+  if (muffle) {
+    handler_ <- function(c) {
+      handler(c)
+      rst_muffle(c)
+    }
+  } else {
+    handler_ <- handler
+  }
+  structure(handler_, class = c("inplace", "handler"))
+}
+
+#' Create a restarting handler
+#'
+#' This constructor automates the common task of creating an
+#' [inplace()] handler that invokes a restart.
+#'
+#' Jumping to a restart point from an inplace handler has two
+#' effects. First, the control flow jumps to wherever the restart was
+#' established, and the restart function is called (with `...`, or
+#' `.fields` as arguments). Execution resumes from the
+#' [with_restarts()] call. Secondly, the transfer of the control flow
+#' out of the function that signalled the condition means that the
+#' handler has dealt with the condition. Thus the condition will not
+#' be passed on to other potential handlers established on the stack.
+#'
+#' @param .restart The name of a restart.
+#' @param .fields A character vector specifying the fields of the
+#'   condition that should be passed as arguments to the restart. If
+#'   named, the names (except empty names `""`) are used as
+#'   argument names for calling the restart function. Otherwise the
+#'   fields themselves are used as argument names.
+#' @param ... Additional arguments passed on to the restart
+#'   function. These arguments are evaluated only once and
+#'   immediately, when creating the restarting handler. Furthermore,
+#'   they are evaluated with [explicit splicing][dots_list].
+#' @export
+#' @seealso [inplace()] and [exiting()].
+#' @examples
+#' # This is a restart that takes a data frame and names as arguments
+#' rst_bar <- function(df, nms) {
+#'   stats::setNames(df, nms)
+#' }
+#'
+#' # This restart is simpler and does not take arguments
+#' rst_baz <- function() "baz"
+#'
+#' # Signalling a condition parameterised with a data frame
+#' fn <- function() {
+#'   with_restarts(cnd_signal("foo", foo_field = mtcars),
+#'     rst_bar = rst_bar,
+#'     rst_baz = rst_baz
+#'   )
+#' }
+#'
+#' # Creating a restarting handler that passes arguments `nms` and
+#' # `df`, the latter taken from a data field of the condition object
+#' restart_bar <- restarting("rst_bar",
+#'   nms = LETTERS[1:11], .fields = c(df = "foo_field")
+#' )
+#'
+#' # The restarting handlers jumps to `rst_bar` when `foo` is signalled:
+#' with_handlers(fn(), foo = restart_bar)
+#'
+#' # The restarting() constructor is especially nice to use with
+#' # restarts that do not need arguments:
+#' with_handlers(fn(), foo = restarting("rst_baz"))
+restarting <- function(.restart, ..., .fields = NULL) {
+  stopifnot(is_scalar_character(.restart))
+  if (!is_null(.fields)) {
+    .fields <- set_names2(.fields)
+    stopifnot(is_character(.fields) && is_dictionaryish(.fields))
+  }
+
+  args <- dots_list(...)
+  handler <- function(c) {
+    fields <- set_names(c[.fields], names(.fields))
+    rst_args <- c(fields, args)
+    do.call("rst_jump", c(list(.restart = .restart), rst_args))
+  }
+
+  structure(handler, class = c("restarting", "inplace", "handler"))
+}
diff --git a/R/cnd-restarts.R b/R/cnd-restarts.R
new file mode 100644
index 0000000..dc88dbf
--- /dev/null
+++ b/R/cnd-restarts.R
@@ -0,0 +1,283 @@
+#' Establish a restart point on the stack
+#'
+#' Restart points are named functions that are established with
+#' `with_restarts()`. Once established, you can interrupt the normal
+#' execution of R code, jump to the restart, and resume execution from
+#' there. Each restart is established along with a restart function
+#' that is executed after the jump and that provides a return value
+#' from the establishing point (i.e., a return value for
+#' `with_restarts()`).
+#'
+#' Restarts are not the only way of jumping to a previous call frame
+#' (see [return_from()] or [return_to()]). However, they have the advantage of
+#' being callable by name once established.
+#'
+#' @param .expr An expression to execute with new restarts established
+#'   on the stack. This argument is passed by expression and supports
+#'   [unquoting][quasiquotation]. It is evaluated in a context where
+#'   restarts are established.
+#' @param ... Named restart functions. The name is taken as the
+#'   restart name and the function is executed after the jump. These
+#'   dots are evaluated with [explicit splicing][dots_list].
+#' @seealso [return_from()] and [return_to()] for a more flexible way
+#'   of performing a non-local jump to an arbitrary call frame.
+#' @export
+#' @examples
+#' # Restarts are not the only way to jump to a previous frame, but
+#' # they have the advantage of being callable by name:
+#' fn <- function() with_restarts(g(), my_restart = function() "returned")
+#' g <- function() h()
+#' h <- function() { rst_jump("my_restart"); "not returned" }
+#' fn()
+#'
+#' # Whereas a non-local return requires manually passing the calling
+#' # frame to the return function:
+#' fn <- function() g(get_env())
+#' g <- function(env) h(env)
+#' h <- function(env) { return_from(env, "returned"); "not returned" }
+#' fn()
+#'
+#'
+#' # rst_maybe_jump() checks that a restart exists before trying to jump:
+#' fn <- function() {
+#'   g()
+#'   cat("will this be called?\n")
+#' }
+#' g <- function() {
+#'   rst_maybe_jump("my_restart")
+#'   cat("will this be called?\n")
+#' }
+#'
+#' # Here no restarts are on the stack:
+#' fn()
+#'
+#' # If a restart point called `my_restart` was established on the
+#' # stack before calling fn(), the control flow will jump there:
+#' rst <- function() {
+#'   cat("restarting...\n")
+#'   "return value"
+#' }
+#' with_restarts(fn(), my_restart = rst)
+#'
+#'
+#' # Restarts are particularly useful to provide alternative default
+#' # values when the normal output cannot be computed:
+#'
+#' fn <- function(valid_input) {
+#'   if (valid_input) {
+#'     return("normal value")
+#'   }
+#'
+#'   # We decide to return the empty string "" as default value. An
+#'   # alternative strategy would be to signal an error. In any case,
+#'   # we want to provide a way for the caller to get a different
+#'   # output. For this purpose, we provide two restart functions that
+#'   # return alternative defaults:
+#'   restarts <- list(
+#'     rst_empty_chr = function() character(0),
+#'     rst_null = function() NULL
+#'   )
+#'
+#'   with_restarts(splice(restarts), .expr = {
+#'
+#'     # Signal a typed condition to let the caller know that we are
+#'     # about to return an empty string as default value:
+#'     cnd_signal("default_empty_string")
+#'
+#'     # If no jump to with_restarts, return default value:
+#'     ""
+#'   })
+#' }
+#'
+#' # Normal value for valid input:
+#' fn(TRUE)
+#'
+#' # Default value for bad input:
+#' fn(FALSE)
+#'
+#' # Change the default value if you need an empty character vector by
+#' # defining an inplace handler that jumps to the restart. It has to
+#' # be inplace because exiting handlers jump to the place where they
+#' # are established before being executed, and the restart is not
+#' # defined anymore at that point:
+#' rst_handler <- inplace(function(c) rst_jump("rst_empty_chr"))
+#' with_handlers(fn(FALSE), default_empty_string = rst_handler)
+#'
+#' # You can use restarting() to create restarting handlers easily:
+#' with_handlers(fn(FALSE), default_empty_string = restarting("rst_null"))
+with_restarts <- function(.expr, ...) {
+  quo <- quo(withRestarts(expr = !! enquo(.expr), !!! dots_list(...)))
+  eval_tidy(quo)
+}
+
+
+#' Restarts utilities
+#'
+#' Restarts are named jumping points established by [with_restarts()].
+#' `rst_list()` returns the names of all restarts currently
+#' established. `rst_exists()` checks if a given restart is
+#' established. `rst_jump()` stops execution of the current function
+#' and jumps to a restart point. If the restart does not exist, an
+#' error is thrown.  `rst_maybe_jump()` first checks that a restart
+#' exists before jumping.
+#'
+#' @param .restart The name of a restart.
+#' @param ... Arguments passed on to the restart function. These
+#'   dots are evaluated with [explicit splicing][dots_list].
+#' @seealso [with_restarts()], [rst_muffle()].
+#' @export
+rst_list <- function() {
+  computeRestarts()
+}
+#' @rdname rst_list
+#' @export
+rst_exists <- function(.restart) {
+  !is.null(findRestart(.restart))
+}
+#' @rdname rst_list
+#' @export
+rst_jump <- function(.restart, ...) {
+  args <- c(list(r = .restart), dots_list(...))
+  do.call("invokeRestart", args)
+}
+#' @rdname rst_list
+#' @export
+rst_maybe_jump <- function(.restart, ...) {
+  if (rst_exists(.restart)) {
+    args <- c(list(r = .restart), dots_list(...))
+    do.call("invokeRestart", args)
+  }
+}
+
+#' Jump to the abort restart
+#'
+#' The abort restart is the only restart that is established at top
+#' level. It is used by R as a top-level target, most notably when an
+#' error is issued (see [abort()]) that no handler is able
+#' to deal with (see [with_handlers()]).
+#'
+#' @seealso [rst_jump()], [abort()] and [cnd_abort()].
+#' @export
+#' @examples
+#' # The `abort` restart is a bit special in that it is always
+#' # registered in a R session. You will always find it on the restart
+#' # stack because it is established at top level:
+#' rst_list()
+#'
+#' # You can use the `abort` restart to jump to top level without
+#' # signalling an error:
+#' \dontrun{
+#' fn <- function() {
+#'   cat("aborting...\n")
+#'   rst_abort()
+#'   cat("This is never called\n")
+#' }
+#' {
+#'   fn()
+#'   cat("This is never called\n")
+#' }
+#' }
+#'
+#' # The `abort` restart is the target that R uses to jump to top
+#' # level when critical errors are signalled:
+#' \dontrun{
+#' {
+#'   abort("error")
+#'   cat("This is never called\n")
+#' }
+#' }
+#'
+#' # If another `abort` restart is specified, errors are signalled as
+#' # usual but then control flow resumes from the new restart:
+#' \dontrun{
+#' out <- NULL
+#' {
+#'   out <- with_restarts(abort("error"), abort = function() "restart!")
+#'   cat("This is called\n")
+#' }
+#' cat("`out` has now become:", out, "\n")
+#' }
+rst_abort <- function() {
+  rst_jump("abort")
+}
+
+#' Jump to a muffling restart
+#'
+#' Muffle restarts are established at the same location as where a
+#' condition is signalled. They are useful for two non-exclusive
+#' purposes: muffling signalling functions and muffling conditions. In
+#' the first case, `rst_muffle()` prevents any further side effects of
+#' a signalling function (a warning or message being displayed, an
+#' aborting jump to top level, etc.). In the second case, the
+#' muffling jump prevents a condition from being passed on to other
+#' handlers. In both cases, execution resumes normally from the point
+#' where the condition was signalled.
+#'
+#' @param c A condition to muffle.
+#' @seealso The `muffle` argument of [inplace()], and the `.mufflable`
+#'   argument of [cnd_signal()].
+#' @export
+#' @examples
+#' side_effect <- function() cat("side effect!\n")
+#' handler <- inplace(function(c) side_effect())
+#'
+#' # A muffling handler is an inplace handler that jumps to a muffle
+#' # restart:
+#' muffling_handler <- inplace(function(c) {
+#'   side_effect()
+#'   rst_muffle(c)
+#' })
+#'
+#' # You can also create a muffling handler simply by setting
+#' # muffle = TRUE:
+#' muffling_handler <- inplace(function(c) side_effect(), muffle = TRUE)
+#'
+#' # You can then muffle the signalling function:
+#' fn <- function(signal, msg) {
+#'   signal(msg)
+#'   "normal return value"
+#' }
+#' with_handlers(fn(message, "some message"), message = handler)
+#' with_handlers(fn(message, "some message"), message = muffling_handler)
+#' with_handlers(fn(warning, "some warning"), warning = muffling_handler)
+#'
+#' # Note that exiting handlers are thrown to the establishing point
+#' # before being executed. At that point, the restart (established
+#' # within the signalling function) does not exist anymore:
+#' \dontrun{
+#' with_handlers(fn(warning, "some warning"),
+#'   warning = exiting(function(c) rst_muffle(c)))
+#' }
+#'
+#'
+#' # Another use case for muffle restarts is to muffle conditions
+#' # themselves. That is, to prevent other condition handlers from
+#' # being called:
+#' undesirable_handler <- inplace(function(c) cat("please don't call me\n"))
+#'
+#' with_handlers(foo = undesirable_handler,
+#'   with_handlers(foo = muffling_handler, {
+#'     cnd_signal("foo", .mufflable = TRUE)
+#'     "return value"
+#'   }))
+#'
+#' # See the `.mufflable` argument of cnd_signal() for more on this point
+rst_muffle <- function(c) {
+  UseMethod("rst_muffle")
+}
+#' @export
+rst_muffle.default <- function(c) {
+  abort("No muffle restart defined for this condition", "control")
+}
+#' @export
+rst_muffle.simpleMessage <- function(c) {
+  rst_jump("muffleMessage")
+}
+#' @export
+rst_muffle.simpleWarning <- function(c) {
+  rst_jump("muffleWarning")
+}
+#' @export
+rst_muffle.mufflable <- function(c) {
+  rst_jump("muffle")
+}
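
As a usage sketch of the methods above (not part of the upstream diff; assumes rlang is attached), `rst_muffle()` on a base warning dispatches to `rst_muffle.simpleWarning()`, which jumps to the `muffleWarning` restart that `warning()` itself establishes:

```r
# The inplace handler jumps to the `muffleWarning` restart, so the
# warning is never displayed and execution resumes right after the
# warning() call.
withCallingHandlers(
  {
    warning("noisy")
    "still runs"   # reached because the handler muffled the warning
  },
  warning = function(c) rst_muffle(c)
)
#> [1] "still runs"
```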
diff --git a/R/cnd.R b/R/cnd.R
new file mode 100644
index 0000000..2540372
--- /dev/null
+++ b/R/cnd.R
@@ -0,0 +1,293 @@
+#' Create a condition object
+#'
+#' These constructors make it easy to create subclassed conditions.
+#' Conditions are objects that power the error system in R. They can
+#' also be used for passing messages to pre-established handlers.
+#'
+#' `new_cnd()` creates objects inheriting from `condition`. Conditions
+#' created with `cnd_error()`, `cnd_warning()` and `cnd_message()`
+#' inherit from `error`, `warning` or `message`.
+#'
+#' @param .type The condition subclass.
+#' @param ... Named data fields stored inside the condition
+#'   object. These dots are evaluated with [explicit
+#'   splicing][dots_list].
+#' @param .msg A default message to inform the user about the
+#'   condition when it is signalled.
+#' @seealso [cnd_signal()], [with_handlers()].
+#' @export
+#' @examples
+#' # Create a condition inheriting from the s3 type "foo":
+#' cnd <- new_cnd("foo")
+#'
+#' # Signal the condition to potential handlers. This has no effect if no
+#' # handler is registered to deal with conditions of type "foo":
+#' cnd_signal(cnd)
+#'
+#' # If a relevant handler is on the current evaluation stack, it will be
+#' # called by cnd_signal():
+#' with_handlers(cnd_signal(cnd), foo = exiting(function(c) "caught!"))
+#'
+#' # Handlers can be thrown or executed inplace. See with_handlers()
+#' # documentation for more on this.
+#'
+#'
+#' # Note that merely signalling a condition inheriting from "error" is
+#' # not sufficient to stop a program:
+#' cnd_signal(cnd_error("my_error"))
+#'
+#' # You need to use stop() to signal a critical condition that should
+#' # terminate the program if not handled:
+#' # stop(cnd_error("my_error"))
+new_cnd <- function(.type = NULL, ..., .msg = NULL) {
+  data <- dots_list(...)
+  if (any(names(data) %in% "message")) {
+    stop("Conditions can't have a `message` data field", call. = FALSE)
+  }
+  if (any(names2(data) == "")) {
+    stop("Conditions must have named data fields", call. = FALSE)
+  }
+  if (!is_null(.msg) && !is_string(.msg)) {
+    stop("Condition message must be a string", call. = FALSE)
+  }
+
+  cnd <- c(list(message = .msg), data)
+  structure(cnd, class = c(.type, "condition"))
+}
+
+#' @rdname new_cnd
+#' @export
+cnd_error <- function(.type = NULL, ..., .msg = NULL) {
+  new_cnd(c(.type, "error"), ..., .msg = .msg)
+}
+#' @rdname new_cnd
+#' @export
+cnd_warning <- function(.type = NULL, ..., .msg = NULL) {
+  new_cnd(c(.type, "warning"), ..., .msg = .msg)
+}
+#' @rdname new_cnd
+#' @export
+cnd_message <- function(.type = NULL, ..., .msg = NULL) {
+  new_cnd(c(.type, "message"), ..., .msg = .msg)
+}
+
+#' Is object a condition?
+#' @param x An object to test.
+is_condition <- function(x) {
+  inherits(x, "condition")
+}
+
+#' Signal a condition
+#'
+#' Signal a condition to handlers that have been established on the
+#' stack. Conditions signalled with `cnd_signal()` are assumed to be
+#' benign. Control flow can resume normally once the condition has
+#' been signalled (if no handler jumped somewhere else on the
+#' evaluation stack). On the other hand, `cnd_abort()` treats the
+#' condition as critical and will jump out of the distressed call
+#' frame (see [rst_abort()]), unless a handler can deal with the
+#' condition.
+#'
+#' `cnd_signal()` has no side effects beyond calling handlers. In
+#' particular, execution will continue normally after signalling the
+#' condition (unless a handler jumped somewhere else via [rst_jump()]
+#' or by being [exiting()]). With `cnd_abort()`, the condition is
+#' signalled via [base::stop()] and the program will terminate if no
+#' handler dealt with the condition by jumping out of the distressed
+#' call frame.
+#'
+#' [inplace()] handlers are called in turn when they decline to handle
+#' the condition by returning normally. However, it is sometimes
+#' useful for an inplace handler to produce a side effect (signalling
+#' another condition, displaying a message, logging something, etc),
+#' prevent the condition from being passed to other handlers, and
+#' resume execution from the place where the condition was
+#' signalled. The easiest way to accomplish this is by jumping to a
+#' restart point (see [with_restarts()]) established by the signalling
+#' function. If `.mufflable` is `TRUE`, a muffle restart is
+#' established. This allows inplace handlers to muffle a signalled
+#' condition. See [rst_muffle()] to jump to a muffling restart, and
+#' the `muffle` argument of [inplace()] for creating a muffling
+#' handler.
+#'
+#' @inheritParams new_cnd
+#' @param .cnd Either a condition object (see [new_cnd()]), or the
+#'   name of an s3 class from which a new condition will be created.
+#' @param .msg A string to override the condition's default message.
+#' @param .call Whether to display the call of the frame in which the
+#'   condition is signalled. If `TRUE`, the call is stored in the
+#'   `call` field of the condition object: this field is displayed by
+#'   R when an error is issued. The call information is also stored in
+#'   the `.call` field in all cases.
+#' @param .mufflable Whether to signal the condition with a muffling
+#'   restart. This is useful to let [inplace()] handlers muffle a
+#'   condition. It stops the condition from being passed to other
+#'   handlers when the inplace handler did not jump elsewhere. `TRUE`
+#'   by default for benign conditions, but `FALSE` for critical ones,
+#'   since in those cases execution should probably not be allowed to
+#'   continue normally.
+#' @seealso [abort()], [warn()] and [inform()] for signalling typical
+#'   R conditions. See [with_handlers()] for establishing condition
+#'   handlers.
+#' @export
+#' @examples
+#' # Creating a condition of type "foo"
+#' cnd <- new_cnd("foo")
+#'
+#' # If no handler capable of dealing with "foo" is established on the
+#' # stack, signalling the condition has no effect:
+#' cnd_signal(cnd)
+#'
+#' # To learn more about establishing condition handlers, see
+#' # documentation for with_handlers(), exiting() and inplace():
+#' with_handlers(cnd_signal(cnd),
+#'   foo = inplace(function(c) cat("side effect!\n"))
+#' )
+#'
+#'
+#' # By default, cnd_signal() creates a muffling restart which allows
+#' # inplace handlers to prevent a condition from being passed on to
+#' # other handlers and to resume execution:
+#' undesirable_handler <- inplace(function(c) cat("please don't call me\n"))
+#' muffling_handler <- inplace(function(c) {
+#'   cat("muffling foo...\n")
+#'   rst_muffle(c)
+#' })
+#'
+#' with_handlers(foo = undesirable_handler,
+#'   with_handlers(foo = muffling_handler, {
+#'     cnd_signal("foo")
+#'     "return value"
+#'   }))
+#'
+#'
+#' # You can signal a critical condition with cnd_abort(). Unlike
+#' # cnd_signal() which has no side effect besides signalling the
+#' # condition, cnd_abort() makes the program terminate with an error
+#' # unless a handler can deal with the condition:
+#' \dontrun{
+#' cnd_abort(cnd)
+#' }
+#'
+#' # If you don't specify a .msg or .call, the default message/call
+#' # (supplied to new_cnd()) are displayed. Otherwise, the ones
+#' # supplied to cnd_abort() and cnd_signal() take precedence:
+#' \dontrun{
+#' critical <- new_cnd("my_error",
+#'   .msg = "default 'my_error' msg",
+#'   .call = quote(default(call))
+#' )
+#' cnd_abort(critical)
+#' cnd_abort(critical, .msg = "overridden msg")
+#'
+#' fn <- function(...) {
+#'   cnd_abort(critical, .call = TRUE)
+#' }
+#' fn(arg = foo(bar))
+#' }
+#'
+#' # Note that by default a condition signalled with cnd_abort() does
+#' # not have a muffle restart. That is because in most cases,
+#' # execution should not continue after signalling a critical
+#' # condition.
+cnd_signal <- function(.cnd, ..., .msg = NULL, .call = FALSE,
+                       .mufflable = TRUE) {
+  cnd <- make_cnd(.cnd, ..., .msg = .msg, .call = sys.call(-1), .show_call = .call)
+  cnd_signal_(cnd, base::signalCondition, .mufflable)
+}
+#' @rdname cnd_signal
+#' @export
+cnd_abort <- function(.cnd, ..., .msg = NULL, .call = FALSE,
+                      .mufflable = FALSE) {
+  cnd <- make_cnd(.cnd, ..., .msg = .msg, .call = sys.call(-1), .show_call = .call)
+  cnd_signal_(cnd, base::stop, .mufflable)
+}
+
+make_cnd <- function(.cnd, ..., .msg, .call, .show_call) {
+  if (is_scalar_character(.cnd)) {
+    .cnd <- new_cnd(.cnd, ...)
+  } else {
+    stopifnot(is_condition(.cnd))
+  }
+
+  # Override default field if supplied
+  .cnd$message <- .msg %||% .cnd$message %||% ""
+
+  # The `call` field is displayed by stop().
+  # But record call in `.call` in all cases.
+  .cnd$.call <- .call
+  if (.show_call) {
+    .cnd$call <- .cnd$.call
+  }
+
+  .cnd
+}
+cnd_signal_ <- function(cnd, signal, mufflable) {
+  if (mufflable) {
+    class(cnd) <- c("mufflable", class(cnd))
+    withRestarts(signal(cnd), muffle = function(...) NULL)
+  } else {
+    signal(cnd)
+  }
+}
+
+#' Signal an error, warning, or message
+#'
+#' These functions are equivalent to base functions [base::stop()],
+#' [base::warning()] and [base::message()], but the `type` argument
+#' makes it easy to create subclassed conditions. They also don't
+#' include call information by default. This saves you from typing
+#' `call. = FALSE` to make error messages cleaner within package
+#' functions.
+#'
+#' Like `stop()` and [cnd_abort()], `abort()` signals a critical
+#' condition and interrupts execution by jumping to top level (see
+#' [rst_abort()]). Only a handler of the relevant type can prevent
+#' this jump by making another jump to a different target on the stack
+#' (see [with_handlers()]).
+#'
+#' `warn()` and `inform()` both have the side effect of displaying a
+#' message. These messages will not be displayed if a handler
+#' transfers control. Transfer can be achieved by establishing an
+#' exiting handler with [with_handlers()]. In
+#' this case, the current function stops and execution resumes at the
+#' point where handlers were established.
+#'
+#' Since it is often desirable to continue normally after a message or
+#' warning, both `warn()` and `inform()` (and their base R equivalents)
+#' establish a muffle restart where handlers can jump to prevent the
+#' message from being displayed. Execution resumes normally after
+#' that. See [rst_muffle()] to jump to a muffling restart, and the
+#' `muffle` argument of [inplace()] for creating a muffling handler.
+#'
+#' @param msg A message to display.
+#' @param type Subclass of the condition to signal.
+#' @param call Whether to display the call.
+#'
+#' @export
+abort <- function(msg, type = NULL, call = FALSE) {
+  cnd <- cnd_error(type, .msg = msg, .call = sys.call(-1))
+  if (call) {
+    cnd$call <- cnd$.call
+  }
+  stop(cnd)
+}
+#' @rdname abort
+#' @export
+warn <- function(msg, type = NULL, call = FALSE) {
+  cnd <- cnd_warning(type, .msg = msg, .call = sys.call(-1))
+  if (call) {
+    cnd$call <- cnd$.call
+  }
+  warning(cnd)
+}
+#' @rdname abort
+#' @export
+inform <- function(msg, type = NULL, call = FALSE) {
+  msg <- paste0(msg, "\n")
+  cnd <- cnd_message(type, .msg = msg, .call = sys.call(-1))
+  if (call) {
+    cnd$call <- cnd$.call
+  }
+  message(cnd)
+}
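
A brief usage sketch of the subclassed conditions produced by `abort()` (not part of the upstream diff; assumes rlang is attached). The `type` subclass is prepended before `"error"`, so handlers can dispatch on it with base `tryCatch()`:

```r
# `my_error` is an illustrative subclass name. The condition's class
# vector is c("my_error", "error", "condition"), so the my_error
# handler below catches it.
tryCatch(
  abort("bad input", type = "my_error"),
  my_error = function(c) conditionMessage(c)
)
#> [1] "bad input"
```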
diff --git a/R/compat-lazyeval.R b/R/compat-lazyeval.R
new file mode 100644
index 0000000..73b7fc2
--- /dev/null
+++ b/R/compat-lazyeval.R
@@ -0,0 +1,90 @@
+# nocov start - compat-lazyeval (last updated: rlang 0.1.9000)
+
+# This file serves as a reference for compatibility functions for lazyeval.
+# Please find the most recent version in rlang's repository.
+
+
+warn_underscored <- function() {
+  return(NULL)
+  warn(paste(
+    "The underscored versions are deprecated in favour of",
+    "tidy evaluation idioms. Please see the documentation",
+    "for `quo()` in rlang"
+  ))
+}
+warn_text_se <- function() {
+  return(NULL)
+  warn("Text parsing is deprecated, please supply an expression or formula")
+}
+
+compat_lazy <- function(lazy, env = caller_env(), warn = TRUE) {
+  if (warn) warn_underscored()
+
+  if (missing(lazy)) {
+    return(quo())
+  }
+
+  coerce_type(lazy, "a quosure",
+    formula = as_quosure(lazy, env),
+    symbol = ,
+    language = new_quosure(lazy, env),
+    string = ,
+    character = {
+      if (warn) warn_text_se()
+      parse_quosure(lazy[[1]], env)
+    },
+    logical = ,
+    integer = ,
+    double = {
+      if (length(lazy) > 1) {
+        warn("Truncating vector to length 1")
+        lazy <- lazy[[1]]
+      }
+      new_quosure(lazy, env)
+    },
+    list =
+      coerce_class(lazy, "a quosure",
+        lazy = new_quosure(lazy$expr, lazy$env)
+      )
+  )
+}
+
+compat_lazy_dots <- function(dots, env, ..., .named = FALSE) {
+  if (missing(dots)) {
+    dots <- list()
+  }
+  if (inherits(dots, c("lazy", "formula"))) {
+    dots <- list(dots)
+  } else {
+    dots <- unclass(dots)
+  }
+  dots <- c(dots, list(...))
+
+  warn <- TRUE
+  for (i in seq_along(dots)) {
+    dots[[i]] <- compat_lazy(dots[[i]], env, warn)
+    warn <- FALSE
+  }
+
+  named <- have_name(dots)
+  if (.named && any(!named)) {
+    nms <- map_chr(dots[!named], f_text)
+    names(dots)[!named] <- nms
+  }
+
+  names(dots) <- names2(dots)
+  dots
+}
+
+compat_as_lazy <- function(quo) {
+  structure(class = "lazy", list(
+    expr = f_rhs(quo),
+    env = f_env(quo)
+  ))
+}
+compat_as_lazy_dots <- function(...) {
+  structure(class = "lazy_dots", map(quos(...), compat_as_lazy))
+}
+
+
+# nocov end
diff --git a/R/compat-oldrel.R b/R/compat-oldrel.R
new file mode 100644
index 0000000..1c9cf93
--- /dev/null
+++ b/R/compat-oldrel.R
@@ -0,0 +1,41 @@
+# nocov start - compat-oldrel (last updated: rlang 0.1.2)
+
+# This file serves as a reference for compatibility functions for old
+# versions of R. Please find the most recent version in rlang's
+# repository.
+
+
+# R 3.2.0 ------------------------------------------------------------
+
+if (getRversion() < "3.2.0") {
+
+  dir_exists <- function(path) {
+    !identical(path, "") && file.exists(paste0(path, .Platform$file.sep))
+  }
+  dir.exists <- function(paths) {
+    vapply(paths, dir_exists, logical(1))
+  }
+
+  names <- function(x) {
+    if (is.environment(x)) {
+      return(ls(x, all.names = TRUE))
+    } else {
+      return(base::names(x))
+    }
+
+    # So R CMD check on old versions of R sees a generic, since we
+    # declare a names() method for dictionary objects
+    UseMethod("names")
+  }
+
+  trimws <- function(x, which = c("both", "left", "right")) {
+    switch(match.arg(which),
+      left = sub("^[ \t\r\n]+", "", x, perl = TRUE),
+      right = sub("[ \t\r\n]+$", "", x, perl = TRUE),
+      both = trimws(trimws(x, "left"), "right")
+    )
+  }
+
+}
+
+# nocov end
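
The backported `trimws()` above reproduces the base semantics introduced in R 3.2.0, stripping spaces, tabs, and newlines from either or both ends:

```r
# Using only the compat definition above (or base trimws() on R >= 3.2.0):
trimws("  hello\t\n")
#> [1] "hello"
trimws("  hello\t\n", which = "left")
#> [1] "hello\t\n"
```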
diff --git a/R/compat-purrr.R b/R/compat-purrr.R
new file mode 100644
index 0000000..7ec8f41
--- /dev/null
+++ b/R/compat-purrr.R
@@ -0,0 +1,162 @@
+# nocov start - compat-purrr (last updated: rlang 0.1.9000)
+
+# This file serves as a reference for compatibility functions for
+# purrr. They are not drop-in replacements but allow a similar style
+# of programming. This is useful in cases where purrr is too heavy a
+# package to depend on. Please find the most recent version in rlang's
+# repository.
+
+map <- function(.x, .f, ...) {
+  lapply(.x, .f, ...)
+}
+map_mold <- function(.x, .f, .mold, ...) {
+  out <- vapply(.x, .f, .mold, ..., USE.NAMES = FALSE)
+  names(out) <- names(.x)
+  out
+}
+map_lgl <- function(.x, .f, ...) {
+  map_mold(.x, .f, logical(1), ...)
+}
+map_int <- function(.x, .f, ...) {
+  map_mold(.x, .f, integer(1), ...)
+}
+map_dbl <- function(.x, .f, ...) {
+  map_mold(.x, .f, double(1), ...)
+}
+map_chr <- function(.x, .f, ...) {
+  map_mold(.x, .f, character(1), ...)
+}
+map_cpl <- function(.x, .f, ...) {
+  map_mold(.x, .f, complex(1), ...)
+}
+
+pluck <- function(.x, .f) {
+  map(.x, `[[`, .f)
+}
+pluck_lgl <- function(.x, .f) {
+  map_lgl(.x, `[[`, .f)
+}
+pluck_int <- function(.x, .f) {
+  map_int(.x, `[[`, .f)
+}
+pluck_dbl <- function(.x, .f) {
+  map_dbl(.x, `[[`, .f)
+}
+pluck_chr <- function(.x, .f) {
+  map_chr(.x, `[[`, .f)
+}
+pluck_cpl <- function(.x, .f) {
+  map_cpl(.x, `[[`, .f)
+}
+
+map2 <- function(.x, .y, .f, ...) {
+  Map(.f, .x, .y, ...)
+}
+map2_lgl <- function(.x, .y, .f, ...) {
+  as.vector(map2(.x, .y, .f, ...), "logical")
+}
+map2_int <- function(.x, .y, .f, ...) {
+  as.vector(map2(.x, .y, .f, ...), "integer")
+}
+map2_dbl <- function(.x, .y, .f, ...) {
+  as.vector(map2(.x, .y, .f, ...), "double")
+}
+map2_chr <- function(.x, .y, .f, ...) {
+  as.vector(map2(.x, .y, .f, ...), "character")
+}
+map2_cpl <- function(.x, .y, .f, ...) {
+  as.vector(map2(.x, .y, .f, ...), "complex")
+}
+
+args_recycle <- function(args) {
+  lengths <- map_int(args, length)
+  n <- max(lengths)
+
+  stopifnot(all(lengths == 1L | lengths == n))
+  to_recycle <- lengths == 1L
+  args[to_recycle] <- map(args[to_recycle], function(x) rep.int(x, n))
+
+  args
+}
+pmap <- function(.l, .f, ...) {
+  args <- args_recycle(.l)
+  do.call("mapply", c(
+    FUN = list(quote(.f)),
+    args, MoreArgs = quote(list(...)),
+    SIMPLIFY = FALSE, USE.NAMES = FALSE
+  ))
+}
+
+probe <- function(.x, .p, ...) {
+  if (is_logical(.p)) {
+    stopifnot(length(.p) == length(.x))
+    .p
+  } else {
+    map_lgl(.x, .p, ...)
+  }
+}
+
+keep <- function(.x, .f, ...) {
+  .x[probe(.x, .f, ...)]
+}
+discard <- function(.x, .p, ...) {
+  sel <- probe(.x, .p, ...)
+  .x[is.na(sel) | !sel]
+}
+map_if <- function(.x, .p, .f, ...) {
+  matches <- probe(.x, .p)
+  .x[matches] <- map(.x[matches], .f, ...)
+  .x
+}
+
+compact <- function(.x) {
+  Filter(length, .x)
+}
+
+transpose <- function(.l) {
+  inner_names <- names(.l[[1]])
+  if (is.null(inner_names)) {
+    fields <- seq_along(.l[[1]])
+  } else {
+    fields <- set_names(inner_names)
+  }
+
+  map(fields, function(i) {
+    map(.l, .subset2, i)
+  })
+}
+
+every <- function(.x, .p, ...) {
+  for (i in seq_along(.x)) {
+    if (!rlang::is_true(.p(.x[[i]], ...))) return(FALSE)
+  }
+  TRUE
+}
+some <- function(.x, .p, ...) {
+  for (i in seq_along(.x)) {
+    if (rlang::is_true(.p(.x[[i]], ...))) return(TRUE)
+  }
+  FALSE
+}
+negate <- function(.p) {
+  function(...) !.p(...)
+}
+
+reduce <- function(.x, .f, ..., .init) {
+  f <- function(x, y) .f(x, y, ...)
+  Reduce(f, .x, init = .init)
+}
+reduce_right <- function(.x, .f, ..., .init) {
+  f <- function(x, y) .f(y, x, ...)
+  Reduce(f, .x, init = .init, right = TRUE)
+}
+accumulate <- function(.x, .f, ..., .init) {
+  f <- function(x, y) .f(x, y, ...)
+  Reduce(f, .x, init = .init, accumulate = TRUE)
+}
+accumulate_right <- function(.x, .f, ..., .init) {
+  f <- function(x, y) .f(y, x, ...)
+  Reduce(f, .x, init = .init, right = TRUE, accumulate = TRUE)
+}
+
+# nocov end
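
These compat helpers mirror purrr's type-stable mapping style; a quick sketch using only the definitions above and base R (not part of the upstream diff):

```r
# map_chr() wraps vapply() with a character(1) mold, so every result
# must be a length-one character vector:
map_chr(list("a", "bb", "ccc"), toupper)
#> [1] "A" "BB" "CCC"

# map2_chr() coerces the Map() result with as.vector(), which also
# drops the names that Map() attaches:
map2_chr(c("a", "b"), c("x", "y"), paste0)
#> [1] "ax" "by"
```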
diff --git a/R/dictionary.R b/R/dictionary.R
new file mode 100644
index 0000000..f5d051f
--- /dev/null
+++ b/R/dictionary.R
@@ -0,0 +1,168 @@
+#' Create a dictionary
+#'
+#' @description
+#'
+#' Dictionaries are container types modelled after R
+#' environments. They are containers of R objects that:
+#'
+#' - Contain uniquely named objects.
+#'
+#' - Can only be indexed by name. They must implement the extracting
+#'   operators `$` and `[[`. The latter returns an error when indexed
+#'   by position because dictionaries are not vectors (they are
+#'   unordered).
+#'
+#' - Report a clear error message when asked to extract a name that
+#'   does not exist. This error message can be customised with the
+#'   `lookup_msg` constructor argument.
+#'
+#' @details
+#'
+#' Dictionaries are used within the tidy evaluation framework for
+#' creating pronouns that can be explicitly referred to from captured
+#' code. See [eval_tidy()].
+#'
+#' @param x An object for which you want to find associated data.
+#' @param lookup_msg An error message when your data source is
+#'   accessed inappropriately (by position rather than name).
+#' @param read_only Whether users can replace elements of the
+#'   dictionary.
+#' @name dictionary
+#' @export
+as_dictionary <- function(x, lookup_msg = NULL, read_only = FALSE) {
+  UseMethod("as_dictionary")
+}
+#' @export
+as_dictionary.default <- function(x, lookup_msg = NULL, read_only = FALSE) {
+  x <- discard_unnamed(x)
+  if (!is_dictionaryish(x)) {
+    abort("Data source must be a dictionary")
+  }
+  new_dictionary(as.list(x), lookup_msg, read_only)
+}
+#' @export
+as_dictionary.dictionary <- function(x, lookup_msg = NULL, read_only = FALSE) {
+  dict <- unclass_dict(x)
+  dict$lookup_msg <- lookup_msg %||% x$lookup_msg
+  dict$read_only <- read_only
+  set_attrs(dict, class = class(x))
+}
+#' @export
+as_dictionary.NULL <- function(x, lookup_msg = NULL, read_only = FALSE) {
+  new_dictionary(list(), lookup_msg, read_only)
+}
+#' @export
+as_dictionary.environment <- function(x, lookup_msg = NULL, read_only = FALSE) {
+  lookup_msg <- lookup_msg %||% "Object `%s` not found in environment"
+  new_dictionary(x, lookup_msg, read_only)
+}
+#' @export
+as_dictionary.data.frame <- function(x, lookup_msg = NULL, read_only = FALSE) {
+  lookup_msg <- lookup_msg %||% "Column `%s` not found in data"
+  new_dictionary(x, lookup_msg, read_only)
+}
+
+new_dictionary <- function(x, lookup_msg, read_only) {
+  .Call(rlang_new_dictionary, x, lookup_msg, read_only)
+}
+
+#' @rdname dictionary
+#' @export
+is_dictionary <- function(x) {
+  inherits(x, "dictionary")
+}
+
+#' @export
+`$.dictionary` <- function(x, name) {
+  src <- .subset2(x, "src")
+  if (!has_binding(src, name)) {
+    abort(sprintf(.subset2(x, "lookup_msg"), name))
+  }
+  src[[name]]
+}
+#' @export
+`[[.dictionary` <- function(x, i, ...) {
+  if (!is_string(i)) {
+    abort("Must subset with a string")
+  }
+  src <- .subset2(x, "src")
+  if (!has_binding(src, i)) {
+    abort(sprintf(.subset2(x, "lookup_msg"), i))
+  }
+  src[[i, ...]]
+}
+
+#' @export
+`$<-.dictionary` <- function(x, i, value) {
+  dict <- unclass_dict(x)
+
+  if (dict$read_only) {
+    abort("Can't modify read-only dictionary")
+  }
+
+  dict$src[[i]] <- value
+  set_attrs(dict, class = class(x))
+}
+#' @export
+`[[<-.dictionary` <- function(x, i, value) {
+  dict <- unclass_dict(x)
+
+  if (dict$read_only) {
+    abort("Can't modify read-only dictionary")
+  }
+  if (!is_string(i)) {
+    abort("Must subset with a string")
+  }
+
+  dict$src[[i]] <- value
+  set_attrs(dict, class = class(x))
+}
+
+#' @export
+names.dictionary <- function(x) {
+  names(unclass(x)$src)
+}
+#' @export
+length.dictionary <- function(x) {
+  length(unclass(x)$src)
+}
+
+has_binding <- function(x, name) {
+  UseMethod("has_binding")
+}
+#' @export
+has_binding.default <- function(x, name) {
+  name %in% names(x)
+}
+#' @export
+has_binding.environment <- function(x, name) {
+  env_has(x, name)
+}
+
+#' @export
+print.dictionary <- function(x, ...) {
+  src <- unclass_dict(x)$src
+  objs <- glue_countable(length(src), "object")
+  cat(paste0("# A dictionary: ", objs, "\n"))
+  invisible(x)
+}
+#' @importFrom utils str
+#' @export
+str.dictionary <- function(object, ...) {
+  str(unclass_dict(object)$src, ...)
+}
+
+glue_countable <- function(n, str) {
+  if (n == 1) {
+    paste0(n, " ", str)
+  } else {
+    paste0(n, " ", str, "s")
+  }
+}
+# Unclassing before print() or str() is necessary because default
+# methods index objects with integers
+unclass_dict <- function(x) {
+  i <- match("dictionary", class(x))
+  class(x) <- class(x)[-i]
+  x
+}
diff --git a/R/dots.R b/R/dots.R
new file mode 100644
index 0000000..a1ee436
--- /dev/null
+++ b/R/dots.R
@@ -0,0 +1,260 @@
+#' Extract dots with splicing semantics
+#'
+#' These functions evaluate all arguments contained in `...` and
+#' return them as a list. They both splice their arguments if they
+#' qualify for splicing. See [ll()] for information about splicing
+#' and below for the kind of arguments that qualify for splicing.
+#'
+#' `dots_list()` has _explicit splicing semantics_: it splices lists
+#' that are explicitly marked for [splicing][ll] with the
+#' [splice()] adjective. `dots_splice()` on the other hand has _list
+#' splicing semantics_: in addition to lists marked explicitly for
+#' splicing, [bare][is_bare_list] lists are spliced as well.
+#'
+#' @inheritParams dots_values
+#' @param ... Arguments with explicit (`dots_list()`) or list
+#'   (`dots_splice()`) splicing semantics. The contents of spliced
+#'   arguments are embedded in the returned list.
+#' @return A list of arguments. This list is always named: unnamed
+#'   arguments are named with the empty string `""`.
+#' @seealso [exprs()] for extracting dots without evaluation.
+#' @export
+#' @examples
+#' # Compared to simply using list(...) to capture dots, dots_list()
+#' # splices explicitly:
+#' x <- list(1, 2)
+#' dots_list(!!! x, 3)
+#'
+#' # Unlike dots_splice(), it doesn't splice bare lists:
+#' dots_list(x, 3)
+#'
+#' # Splicing is also helpful to work around exact and partial matching
+#' # of arguments. Let's create a function taking named arguments and
+#' # dots:
+#' fn <- function(data, ...) {
+#'   dots_list(...)
+#' }
+#'
+#' # You normally cannot pass an argument named `data` through the dots
+#' # as it will match `fn`'s `data` argument. The splicing syntax
+#' # provides a workaround:
+#' fn(some_data, !!! list(data = letters))
+dots_list <- function(..., .ignore_empty = c("trailing", "none", "all")) {
+  dots <- dots_values(..., .ignore_empty = .ignore_empty)
+  dots <- .Call(rlang_squash, dots, "list", is_spliced, 1L)
+  names(dots) <- names2(dots)
+  dots
+}
+#' @rdname dots_list
+#' @export
+#' @examples
+#'
+#' # dots_splice() splices lists marked with splice() as well as bare
+#' # lists:
+#' x <- list(1, 2)
+#' dots_splice(!!! x, 3)
+#' dots_splice(x, 3)
+dots_splice <- function(..., .ignore_empty = c("trailing", "none", "all")) {
+  dots <- dots_values(..., .ignore_empty = .ignore_empty)
+  dots <- .Call(rlang_squash, dots, "list", is_spliced_bare, 1L)
+  names(dots) <- names2(dots)
+  dots
+}
+
+#' Evaluate dots with preliminary splicing
+#'
+#' This is a tool for advanced users. It captures dots, processes
+#' unquoting and splicing operators, and evaluates them. Unlike
+#' [dots_list()] and [dots_splice()], it does not flatten spliced
+#' objects. They are merely attributed a `spliced` class (see
+#' [splice()]). You can process spliced objects manually, perhaps with
+#' a custom predicate (see [flatten_if()]).
+#'
+#' @param ... Arguments to evaluate and process splicing operators.
+#' @param .ignore_empty Whether to ignore empty arguments. Can be one
+#'   of `"trailing"`, `"none"`, `"all"`. If `"trailing"`, only the
+#'   last argument is ignored if it is empty.
+#' @export
+#' @examples
+#' dots <- dots_values(!!! list(1))
+#' dots
+#'
+#' # Flatten the spliced objects:
+#' flatten_if(dots, is_spliced)
+dots_values <- function(..., .ignore_empty = c("trailing", "none", "all")) {
+  dots <- dots_capture(..., `__quosured` = FALSE)
+  dots <- dots_clean_empty(dots, function(x) is_missing(x$expr), .ignore_empty)
+  if (is_null(dots)) {
+    dots <- list(...)
+    set_names(dots, names2(dots))
+  } else {
+    map(dots, function(dot) eval_bare(dot$expr, dot$env))
+  }
+}
+
+dots_clean_empty <- function(dots, is_empty, ignore_empty) {
+  n_dots <- length(dots)
+
+  if (n_dots) {
+    which <- match.arg(ignore_empty, c("trailing", "none", "all"))
+    switch(which,
+      trailing =
+        if (is_empty(dots[[n_dots]])) {
+          dots[[n_dots]] <- NULL
+        },
+      all = {
+        dots <- discard(dots, is_empty)
+      }
+    )
+  }
+
+  dots
+}
+
+
+#' @rdname quosures
+#' @export
+dots_definitions <- function(..., .named = FALSE) {
+  dots <- dots_enquose(..., `__interp_lhs` = FALSE)
+  if (.named) {
+    width <- quo_names_width(.named)
+    dots <- quos_auto_name(dots, width)
+  }
+
+  is_def <- map_lgl(dots, function(dot) is_definition(f_rhs(dot)))
+  defs <- map(dots[is_def], as_definition)
+
+  list(dots = dots[!is_def], defs = defs)
+}
+as_definition <- function(def) {
+  # The definition comes wrapped in a quosure
+  env <- f_env(def)
+  def <- f_rhs(def)
+
+  list(
+    lhs = new_quosure(f_lhs(def), env),
+    rhs = new_quosure(f_rhs(def), env)
+  )
+}
+
+is_dot_symbol <- function(x) {
+  is_symbol(x) && is_dot_nm(as.character(x))
+}
+is_dot_nm <- function(nm) {
+  grepl("^\\.\\.[0-9]+$", nm)
+}
+
+#' How many arguments are currently forwarded in dots?
+#'
+#' This returns the number of arguments currently forwarded in `...`
+#' as an integer.
+#'
+#' @param ... Forwarded arguments.
+#' @export
+#' @examples
+#' fn <- function(...) dots_n(..., baz)
+#' fn(foo, bar)
+dots_n <- function(...) {
+  nargs()
+}
+
+dots_capture <- function(..., `__interp_lhs` = TRUE, `__quosured` = TRUE) {
+  info <- captureDots(strict = `__quosured`)
+
+  # No interpolation because dots were already evaluated
+  if (is_null(info)) {
+    return(NULL)
+  }
+
+  dots <- map(info, dot_interp, quosured = `__quosured`)
+
+  # Flatten possibly spliced dots
+  dots <- unlist(dots, FALSE) %||% set_names(list())
+
+  if (`__interp_lhs`) {
+    dots <- dots_interp_lhs(dots)
+  }
+
+  dots
+}
+dot_interp <- function(dot, quosured = TRUE) {
+  if (is_missing(dot$expr)) {
+    return(list(dot))
+  }
+  env <- dot$env
+  expr <- dot$expr
+
+  # Allow unquote-splice in dots
+  if (is_splice(expr)) {
+    dots <- call("alist", expr)
+    dots <- .Call(rlang_interp, dots, env, quosured)
+    dots <- eval_bare(dots)
+    map(dots, function(expr) list(expr = expr, env = env))
+  } else {
+    expr <- .Call(rlang_interp, expr, env, quosured)
+    list(list(expr = expr, env = env))
+  }
+}
+
+dots_enquose <- function(..., `__interp_lhs` = TRUE) {
+  dots <- dots_capture(..., `__interp_lhs` = `__interp_lhs`)
+  map(dots, dot_enquose)
+}
+dot_enquose <- function(dot) {
+  if (is_missing(dot$expr)) {
+    new_quosure(missing_arg(), empty_env())
+  } else {
+    forward_quosure(dot$expr, dot$env)
+  }
+}
+
+is_bang <- function(expr) {
+  is_lang(expr) && identical(node_car(expr), quote(`!`))
+}
+is_splice <- function(expr) {
+  if (!is.call(expr)) {
+    return(FALSE)
+  }
+
+  if (identical(node_car(expr), quote(UQS)) || identical(node_car(expr), quote(rlang::UQS))) {
+    return(TRUE)
+  }
+
+  if (is_bang(expr) && is_bang(node_cadr(expr)) && is_bang(node_cadr(node_cadr(expr)))) {
+    return(TRUE)
+  }
+
+  FALSE
+}
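[Editor's note: the triple-bang branch above relies on the fact that `!!!x` has no special meaning to the R parser — it is read as three nested calls to `!`. A base-R sketch of the same check (the helper name is illustrative, not part of rlang):]

```r
# `!!!x` parses as `!(!(!x))`, i.e. three nested `!` calls.
e <- quote(!!!x)

is_bang_call <- function(x) {
  is.call(x) && identical(x[[1]], as.name("!"))
}

is_bang_call(e)            # outermost bang
is_bang_call(e[[2]])       # second bang
is_bang_call(e[[2]][[2]])  # innermost bang, wrapping `x`
```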
+
+dots_interp_lhs <- function(dots) {
+  nms <- names2(dots)
+  defs <- map_lgl(dots, function(dot) is_definition(dot$expr))
+
+  for (i in which(defs)) {
+    info <- dot_interp_lhs(nms[[i]], dots[[i]])
+    dots[[i]] <- info$dot
+
+    if (!is_null(info$name)) {
+      nms[[i]] <- info$name
+    }
+  }
+
+  names(dots) <- nms
+  dots
+}
+dot_interp_lhs <- function(name, dot) {
+  if (!is_null(name) && name != "") {
+    warn("name ignored because a LHS was supplied")
+  }
+
+  lhs <- .Call(rlang_interp, f_lhs(dot$expr), dot$env, FALSE)
+  if (is_symbol(lhs)) {
+    lhs <- as_string(lhs)
+  } else if (!is_string(lhs)) {
+    abort("LHS must be a name or string")
+  }
+
+  dot <- list(expr = f_rhs(dot$expr), env = dot$env)
+  list(name = lhs, dot = dot)
+}
diff --git a/R/env.R b/R/env.R
new file mode 100644
index 0000000..c33a7e2
--- /dev/null
+++ b/R/env.R
@@ -0,0 +1,1053 @@
+#' Create a new environment
+#'
+#' @description
+#'
+#' These functions create new environments.
+#'
+#' * `env()` always creates a child of the current environment.
+#'
+#' * `child_env()` lets you specify a parent (see section on
+#'   inheritance).
+#'
+#' * `new_environment()` creates a child of the empty environment. It
+#'   is useful e.g. for using environments as containers of data
+#'   rather than as part of a scope hierarchy.
+#'
+#' @section Environments as objects:
+#'
+#' Environments are containers of uniquely named objects. Their most
+#' common use is to provide a scope for the evaluation of R
+#' expressions. Not all languages have first-class environments,
+#' i.e. the ability to manipulate scope as regular objects.
+#' Reification of scope is one of the most powerful features of R as
+#' it allows you to change what objects a function or expression sees
+#' when it is evaluated.
+#'
+#' Environments also constitute a data structure in their own
+#' right. They are [dictionaries][dictionary] of uniquely named
+#' objects, subsettable by name and modifiable by reference. This
+#' latter property (see section on reference semantics) is especially
+#' useful for creating mutable OO systems (cf the [R6
+#' package](https://github.com/wch/R6) and the [ggproto
+#' system](http://ggplot2.tidyverse.org/articles/extending-ggplot2.html)
+#' for extending ggplot2).
+#'
+#' @section Inheritance:
+#'
+#' All R environments (except the [empty environment][empty_env]) are
+#' defined with a parent environment. An environment and its
+#' grandparents thus form a linear hierarchy that is the basis for
+#' [lexical
+#' scoping](https://en.wikipedia.org/wiki/Scope_(computer_science)) in
+#' R. When R evaluates an expression, it looks up symbols in a given
+#' environment. If it cannot find these symbols there, it keeps
+#' looking them up in parent environments. This way, objects defined
+#' in child environments have precedence over objects defined in
+#' parent environments.
+#'
+#' The ability to override specific definitions is used in the
+#' tidyeval framework to create powerful domain-specific grammars. A
+#' common use of overscoping is to put data frame columns in
+#' scope. See [as_overscope()] for technical details.
+#'
+#' @section Reference semantics:
+#'
+#' Unlike regular objects such as vectors, environments are an
+#' [uncopyable][is_copyable()] object type. This means that if you
+#' have multiple references to a given environment (by assigning the
+#' environment to another symbol with `<-` or passing the environment
+#' as argument to a function), modifying the bindings of one of those
+#' references changes all other references as well.
+#'
+#' @param ...,data Named values. The dots have [explicit splicing
+#'   semantics][dots_list].
+#' @param .parent A parent environment. Can be an object supported by
+#'   [as_env()].
+#' @seealso [scoped_env()], [env_has()], [env_bind()].
+#' @export
+#' @examples
+#' # env() creates a new environment which has the current environment
+#' # as parent
+#' env <- env(a = 1, b = "foo")
+#' env$b
+#' identical(env_parent(env), get_env())
+#'
+#'
+#' # child_env() lets you specify a parent:
+#' child <- child_env(env, c = "bar")
+#' identical(env_parent(child), env)
+#'
+#' # This child environment owns `c` but inherits `a` and `b` from `env`:
+#' env_has(child, c("a", "b", "c", "d"))
+#' env_has(child, c("a", "b", "c", "d"), inherit = TRUE)
+#'
+#' # `parent` is passed to as_env() to provide handy shortcuts. Pass a
+#' # string to create a child of a package environment:
+#' child_env("rlang")
+#' env_parent(child_env("rlang"))
+#'
+#' # Or `NULL` to create a child of the empty environment:
+#' child_env(NULL)
+#' env_parent(child_env(NULL))
+#'
+#' # The base package environment is often a good default choice for a
+#' # parent environment because it contains all standard base
+#' # functions. Also note that it will never inherit from other loaded
+#' # package environments since R keeps the base package at the tail
+#' # of the search path:
+#' base_child <- child_env("base")
+#' env_has(base_child, c("lapply", "("), inherit = TRUE)
+#'
+#' # On the other hand, a child of the empty environment doesn't even
+#' # see a definition for `(`
+#' empty_child <- child_env(NULL)
+#' env_has(empty_child, c("lapply", "("), inherit = TRUE)
+#'
+#' # Note that all other package environments inherit from base_env()
+#' # as well:
+#' rlang_child <- child_env("rlang")
+#' env_has(rlang_child, "env", inherit = TRUE)     # rlang function
+#' env_has(rlang_child, "lapply", inherit = TRUE)  # base function
+#'
+#'
+#' # Both env() and child_env() take dots with explicit splicing:
+#' objs <- list(b = "foo", c = "bar")
+#' env <- env(a = 1, !!! objs)
+#' env$c
+#'
+#' # You can also unquote names with the definition operator `:=`
+#' var <- "a"
+#' env <- env(!!var := "A")
+#' env$a
+#'
+#'
+#' # Use new_environment() to create containers with the empty
+#' # environment as parent:
+#' env <- new_environment()
+#' env_parent(env)
+#'
+#' # Like other new_ constructors, it takes an object rather than dots:
+#' new_environment(list(a = "foo", b = "bar"))
+env <- function(...) {
+  env <- new.env(parent = caller_env())
+  env_bind_impl(env, dots_list(...))
+}
+#' @rdname env
+#' @export
+child_env <- function(.parent, ...) {
+  env <- new.env(parent = as_env(.parent))
+  env_bind_impl(env, dots_list(...))
+}
+#' @rdname env
+#' @export
+new_environment <- function(data = list()) {
+  env <- new.env(parent = empty_env())
+  env_bind_impl(env, data)
+}
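[Editor's note: the reference semantics described in the documentation above can be demonstrated with plain base R, independently of rlang:]

```r
# Environments are not copied on assignment: `f` below is a second
# reference to the same environment as `e`, not a copy of it.
e <- new.env()
e$x <- 1

f <- e
f$x <- 2

e$x  # 2 -- the modification is visible through both references
```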
+
+#' Coerce to an environment
+#'
+#' `as_env()` coerces named vectors (including lists) to an
+#' environment. It first checks that `x` is a dictionary (see
+#' [is_dictionaryish()]). If supplied an unnamed string, it returns the
+#' corresponding package environment (see [pkg_env()]).
+#'
+#' If `x` is an environment and `parent` is not `NULL`, the
+#' environment is duplicated before being given a new parent. The return
+#' value is therefore a different environment than `x`.
+#'
+#' @param x An object to coerce.
+#' @param parent A parent environment, [empty_env()] by default. This
+#'   argument is only used when `x` is data actually coerced to an
+#'   environment (as opposed to data representing an environment, like
+#'   `NULL` representing the empty environment).
+#' @export
+#' @examples
+#' # Coerce a named vector to an environment:
+#' env <- as_env(mtcars)
+#'
+#' # By default it gets the empty environment as parent:
+#' identical(env_parent(env), empty_env())
+#'
+#'
+#' # With strings it is a handy shortcut for pkg_env():
+#' as_env("base")
+#' as_env("rlang")
+#'
+#' # With NULL it returns the empty environment:
+#' as_env(NULL)
+as_env <- function(x, parent = NULL) {
+  coerce_type(x, "an environment",
+    NULL = {
+      empty_env()
+    },
+    environment = {
+      x
+    },
+    string = {
+      if (length(x) > 1 || is_named(x)) {
+        return(as_env_(x, parent))
+      }
+      pkg_env(x)
+    },
+    logical = ,
+    integer = ,
+    double = ,
+    complex = ,
+    character = ,
+    raw = ,
+    list = {
+      as_env_(x, parent)
+    }
+  )
+}
+as_env_ <- function(x, parent = NULL) {
+  stopifnot(is_dictionaryish(x))
+  if (is_atomic(x)) {
+    x <- as_list(x)
+  }
+  list2env(x, parent = parent %||% empty_env())
+}
+
+#' Get parent environments
+#'
+#' @description
+#'
+#' - `env_parent()` returns the parent environment of `env` if called
+#'   with `n = 1`, the grandparent with `n = 2`, etc.
+#'
+#' - `env_tail()` searches through the parents and returns the one
+#'   which has [empty_env()] as parent.
+#'
+#' - `env_parents()` returns the list of all parents, including the
+#'   empty environment.
+#'
+#' See the section on _inheritance_ in [env()]'s documentation.
+#'
+#' @inheritParams get_env
+#' @param n The number of generations to go up.
+#' @return An environment for `env_parent()` and `env_tail()`, a list
+#'   of environments for `env_parents()`.
+#' @export
+#' @examples
+#' # Get the parent environment with env_parent():
+#' env_parent(global_env())
+#'
+#' # Or the tail environment with env_tail():
+#' env_tail(global_env())
+#'
+#' # By default, env_parent() returns the parent environment of the
+#' # current evaluation frame. If called at top-level (the global
+#' # frame), the following two expressions are equivalent:
+#' env_parent()
+#' env_parent(base_env())
+#'
+#' # This default is more handy when called within a function. In this
+#' # case, the enclosure environment of the function is returned
+#' # (since it is the parent of the evaluation frame):
+#' enclos_env <- env()
+#' fn <- set_env(function() env_parent(), enclos_env)
+#' identical(enclos_env, fn())
+env_parent <- function(env = caller_env(), n = 1) {
+  env_ <- get_env(env)
+
+  while (n > 0) {
+    if (is_empty_env(env_)) {
+      return(env_)
+    }
+    n <- n - 1
+    env_ <- parent.env(env_)
+  }
+
+  env_
+}
+#' @rdname env_parent
+#' @export
+env_tail <- function(env = caller_env()) {
+  env_ <- get_env(env)
+  next_env <- parent.env(env_)
+
+  while (!is_empty_env(next_env)) {
+    env_ <- next_env
+    next_env <- parent.env(next_env)
+  }
+
+  env_
+}
+#' @rdname env_parent
+#' @export
+env_parents <- function(env = caller_env()) {
+  out <- list_len(env_depth(env))
+
+  i <- 1L
+  while (!is_empty_env(env)) {
+    env <- env_parent(env)
+    out[[i]] <- env
+    i <- i + 1L
+  }
+
+  out
+}
+
+#' Depth of an environment chain
+#'
+#' This function returns the number of environments between `env` and
+#' the [empty environment][empty_env()], including `env`. The depth of
+#' `env` is also the number of parents of `env` (since the empty
+#' environment counts as a parent).
+#'
+#' @inheritParams get_env
+#' @return An integer.
+#' @seealso The section on inheritance in [env()] documentation.
+#' @export
+#' @examples
+#' env_depth(empty_env())
+#' env_depth(pkg_env("rlang"))
+env_depth <- function(env) {
+  env_ <- get_env(env)
+
+  n <- 0L
+  while (!is_empty_env(env_)) {
+    env_ <- env_parent(env_)
+    n <- n + 1L
+  }
+
+  n
+}
+`_empty_env` <- emptyenv()
+is_empty_env <- function(env) {
+  is_reference(env, `_empty_env`)
+}
+
+#' Get or set the environment of an object
+#'
+#' These functions dispatch internally with methods for functions,
+#' formulas and frames. If called with a missing argument, the
+#' environment of the current evaluation frame (see [ctxt_stack()]) is
+#' returned. If you call `get_env()` with an environment, it acts as
+#' the identity function and the environment is simply returned (this
+#' helps simplify code when writing generic functions for
+#' environments).
+#'
+#' @param env An environment or an object bundling an environment,
+#'   e.g. a formula, [quosure] or [closure][is_closure].
+#' @param default The default environment in case `env` does not wrap
+#'   an environment. If `NULL` and no environment could be extracted,
+#'   an error is issued.
+#' @export
+#' @examples
+#' # Get the environment of frame objects. If no argument is supplied,
+#' # the current frame is used:
+#' fn <- function() {
+#'   list(
+#'     get_env(call_frame()),
+#'     get_env()
+#'   )
+#' }
+#' fn()
+#'
+#' # Environment of closure functions:
+#' get_env(fn)
+#'
+#' # Or of quosures or formulas:
+#' get_env(~foo)
+#' get_env(quo(foo))
+#'
+#'
+#' # Provide a default in case the object doesn't bundle an environment.
+#' # Let's create an unevaluated formula:
+#' f <- quote(~foo)
+#'
+#' # The following line would fail if run because unevaluated formulas
+#' # don't bundle an environment (they didn't have the chance to
+#' # record one yet):
+#' # get_env(f)
+#'
+#' # It is often useful to provide a default when you're writing
+#' # functions accepting formulas as input:
+#' default <- env()
+#' identical(get_env(f, default), default)
+get_env <- function(env = caller_env(), default = NULL) {
+  out <- switch_type(env,
+    environment = env,
+    definition = ,
+    formula = attr(env, ".Environment"),
+    primitive = base_env(),
+    closure = environment(env),
+    list = switch_class(env, frame = env$env)
+  )
+
+  out <- out %||% default
+
+  if (is_null(out)) {
+    type <- friendly_type(type_of(env))
+    abort(paste0("Can't extract an environment from ", type))
+  } else {
+    out
+  }
+}
+#' @rdname get_env
+#' @param new_env An environment to replace `env` with. Can be an
+#'   object handled by `get_env()`.
+#' @export
+#' @examples
+#'
+#' # set_env() can be used to set the enclosure of functions and
+#' # formulas. Let's create a function with a particular environment:
+#' env <- child_env("base")
+#' fn <- set_env(function() NULL, env)
+#'
+#' # That function now has `env` as enclosure:
+#' identical(get_env(fn), env)
+#' identical(get_env(fn), get_env())
+#'
+#' # set_env() does not work by side effect. Setting a new environment
+#' # for fn has no effect on the original function:
+#' other_env <- child_env(NULL)
+#' set_env(fn, other_env)
+#' identical(get_env(fn), other_env)
+#'
+#' # Since set_env() returns a new function with a different
+#' # environment, you'll need to reassign the result:
+#' fn <- set_env(fn, other_env)
+#' identical(get_env(fn), other_env)
+set_env <- function(env, new_env = caller_env()) {
+  switch_type(env,
+    definition = ,
+    formula = ,
+    closure = {
+      environment(env) <- get_env(new_env)
+      env
+    },
+    environment = get_env(new_env),
+    abort(paste0(
+      "Can't set environment for ", friendly_type(type_of(env))
+    ))
+  )
+}
+
+mut_env_parent <- function(env, new_env) {
+  .Call(rlang_mut_env_parent, get_env(env), new_env)
+}
+`env_parent<-` <- function(x, value) {
+  .Call(rlang_mut_env_parent, get_env(x), value)
+}
+
+
+#' Bind symbols to objects in an environment
+#'
+#' @description
+#'
+#' These functions create bindings in an environment. The bindings are
+#' supplied through `...` as pairs of names and values or expressions.
+#' `env_bind()` is equivalent to evaluating a `<-` expression within
+#' the given environment. This function should take care of the
+#' majority of use cases but the other variants can be useful for
+#' specific problems.
+#'
+#' - `env_bind()` takes named _values_. The arguments are evaluated
+#'   once (with [explicit splicing][dots_list]) and bound in `.env`.
+#'   `env_bind()` is equivalent to [base::assign()].
+#'
+#' - `env_bind_fns()` takes named _functions_ and creates active
+#'   bindings in `.env`. This is equivalent to
+#'   [base::makeActiveBinding()]. An active binding executes a
+#'   function each time it is evaluated. `env_bind_fns()` takes dots
+#'   with [implicit splicing][dots_splice], so that you can supply
+#'   both named functions and named lists of functions.
+#'
+#'   If these functions are [closures][is_closure] they are lexically
+#'   scoped in the environment that they bundle. These functions can
+#'   thus refer to symbols from this enclosure that are not actually
+#'   in scope in the dynamic environment where the active bindings are
+#'   invoked. This allows creative solutions to difficult problems
+#'   (see the implementations of `dplyr::do()` methods for an
+#'   example).
+#'
+#' - `env_bind_exprs()` takes named _expressions_. This is equivalent
+#'   to [base::delayedAssign()]. The arguments are captured with
+#'   [exprs()] (and thus support call-splicing and unquoting) and
+#'   assigned to symbols in `.env`. These expressions are not
+#'   evaluated immediately but lazily. Once a symbol is evaluated, the
+#'   corresponding expression is evaluated in turn and its value is
+#'   bound to the symbol (the expressions are thus evaluated only
+#'   once, if at all).
+#'
+#' @section Side effects:
+#'
+#' Since environments have reference semantics (see relevant section
+#' in [env()] documentation), modifying the bindings of an environment
+#' produces effects in all other references to that environment. In
+#' other words, `env_bind()` and its variants have side effects.
+#'
+#' As they are called primarily for their side effects, these
+#' functions follow the convention of returning their input invisibly.
+#'
+#' @param .env An environment or an object bundling an environment,
+#'   e.g. a formula, [quosure] or [closure][is_closure]. This argument
+#'   is passed to [get_env()].
+#' @param ... Pairs of names and expressions, values or
+#'   functions. These dots support splicing (with varying semantics,
+#'   see above) and name unquoting.
+#' @return The input object `.env`, with its associated environment
+#'   modified in place, invisibly.
+#' @export
+#' @examples
+#' # env_bind() is a programmatic way of assigning values to symbols
+#' # with `<-`. We can add bindings in the current environment:
+#' env_bind(get_env(), foo = "bar")
+#' foo
+#'
+#' # Or modify those bindings:
+#' bar <- "bar"
+#' env_bind(get_env(), bar = "BAR")
+#' bar
+#'
+#' # It is most useful to change other environments:
+#' my_env <- env()
+#' env_bind(my_env, foo = "foo")
+#' my_env$foo
+#'
+#' # A useful feature is to splice lists of named values:
+#' vals <- list(a = 10, b = 20)
+#' env_bind(my_env, !!! vals, c = 30)
+#' my_env$b
+#' my_env$c
+#'
+#' # You can also unquote a variable referring to a symbol or a string
+#' # as binding name:
+#' var <- "baz"
+#' env_bind(my_env, !!var := "BAZ")
+#' my_env$baz
+#'
+#'
+#' # env_bind() and its variants are generic over formulas, quosures
+#' # and closures. To illustrate this, let's create a closure function
+#' # referring to undefined bindings:
+#' fn <- function() list(a, b)
+#' fn <- set_env(fn, child_env("base"))
+#'
+#' # This would fail if run since `a` etc are not defined in the
+#' # enclosure of fn() (a child of the base environment):
+#' # fn()
+#'
+#' # Let's define those symbols:
+#' env_bind(fn, a = "a", b = "b")
+#'
+#' # fn() now sees the objects:
+#' fn()
+env_bind <- function(.env, ...) {
+  invisible(env_bind_impl(.env, dots_list(...)))
+}
+env_bind_impl <- function(env, data) {
+  stopifnot(is_vector(data))
+  stopifnot(!length(data) || is_named(data))
+
+  nms <- names(data)
+  env_ <- get_env(env)
+
+  for (i in seq_along(data)) {
+    nm <- nms[[i]]
+    base::assign(nm, data[[nm]], envir = env_)
+  }
+
+  env
+}
+#' @rdname env_bind
+#' @param .eval_env The environment where the expressions will be
+#'   evaluated when the symbols are forced.
+#' @export
+#' @examples
+#'
+#' # env_bind_exprs() assigns expressions lazily:
+#' env <- env()
+#' env_bind_exprs(env, name = cat("forced!\n"))
+#' env$name
+#' env$name
+#'
+#' # You can unquote expressions. Note that quosures are not
+#' # supported, only raw expressions:
+#' expr <- quote(message("forced!"))
+#' env_bind_exprs(env, name = !! expr)
+#' env$name
+env_bind_exprs <- function(.env, ..., .eval_env = caller_env()) {
+  exprs <- exprs(...)
+  stopifnot(is_named(exprs))
+
+  nms <- names(exprs)
+  env_ <- get_env(.env)
+
+  for (i in seq_along(exprs)) {
+    do.call("delayedAssign", list(
+      x = nms[[i]],
+      value = exprs[[i]],
+      eval.env = .eval_env,
+      assign.env = env_
+    ))
+  }
+
+  invisible(.env)
+}
+#' @rdname env_bind
+#' @export
+#' @examples
+#'
+#' # You can create active bindings with env_bind_fns()
+#' # Let's create some bindings in the lexical enclosure of `fn`:
+#' counter <- 0
+#'
+#' # And now a function that increments the counter and returns a
+#' # string with the count:
+#' fn <- function() {
+#'   counter <<- counter + 1
+#'   paste("my counter:", counter)
+#' }
+#'
+#' # Now we create an active binding in a child of the current
+#' # environment:
+#' env <- env()
+#' env_bind_fns(env, symbol = fn)
+#'
+#' # `fn` is executed each time `symbol` is evaluated or retrieved:
+#' env$symbol
+#' env$symbol
+#' eval_bare(quote(symbol), env)
+#' eval_bare(quote(symbol), env)
+env_bind_fns <- function(.env, ...) {
+  fns <- dots_splice(...)
+  stopifnot(is_named(fns) && every(fns, is_function))
+
+  nms <- names(fns)
+  env_ <- get_env(.env)
+
+  for (i in seq_along(fns)) {
+    makeActiveBinding(nms[[i]], fns[[i]], env_)
+  }
+
+  invisible(.env)
+}
+
+#' Overscope bindings by defining symbols deeper in a scope
+#'
+#' `env_bury()` is like [env_bind()] but it creates the bindings in a
+#' new child environment. This makes sure the new bindings have
+#' precedence over old ones, without altering existing environments.
+#' Unlike `env_bind()`, this function does not have side effects and
+#' returns a new environment (or object wrapping that environment).
+#'
+#' @inheritParams env_bind
+#' @return A copy of `.env` enclosing the new environment containing
+#'   bindings to `...` arguments.
+#' @seealso [env_bind()], [env_unbind()]
+#' @export
+#' @examples
+#' orig_env <- env(a = 10)
+#' fn <- set_env(function() a, orig_env)
+#'
+#' # fn() currently sees `a` as the value `10`:
+#' fn()
+#'
+#' # env_bury() will bury the current scope of fn() behind a new
+#' # environment:
+#' fn <- env_bury(fn, a = 1000)
+#' fn()
+#'
+#' # Even though the symbol `a` is still defined deeper in the scope:
+#' orig_env$a
+env_bury <- function(.env, ...) {
+  env_ <- get_env(.env)
+  env_ <- child_env(env_, ...)
+  set_env(.env, env_)
+}
+
+#' Remove bindings from an environment
+#'
+#' `env_unbind()` is the complement of [env_bind()]. Like `env_has()`,
+#' it ignores the parent environments of `env` by default. Set
+#' `inherit` to `TRUE` to track down bindings in parent environments.
+#'
+#' @inheritParams get_env
+#' @param nms A character vector containing the names of the bindings
+#'   to remove.
+#' @param inherit Whether to look for bindings in the parent
+#'   environments.
+#' @return The input object `env` with its associated environment
+#'   modified in place, invisibly.
+#' @export
+#' @examples
+#' data <- set_names(as_list(letters), letters)
+#' env_bind(environment(), !!! data)
+#' env_has(environment(), letters)
+#'
+#' # env_unbind() removes bindings:
+#' env_unbind(environment(), letters)
+#' env_has(environment(), letters)
+#'
+#' # With inherit = TRUE, it removes bindings in parent environments
+#' # as well:
+#' parent <- child_env(NULL, foo = "a")
+#' env <- child_env(parent, foo = "b")
+#' env_unbind(env, "foo", inherit = TRUE)
+#' env_has(env, "foo", inherit = TRUE)
+env_unbind <- function(env = caller_env(), nms, inherit = FALSE) {
+  env_ <- get_env(env)
+
+  if (inherit) {
+    while (any(env_has(env_, nms, inherit = TRUE))) {
+      rm(list = nms, envir = env_, inherits = TRUE)
+    }
+  } else {
+    rm(list = nms, envir = env_)
+  }
+
+  invisible(env)
+}
+
+#' Does an environment have or see bindings?
+#'
+#' `env_has()` is a vectorised predicate that queries whether an
+#' environment owns bindings personally (with `inherit` set to
+#' `FALSE`, the default), or sees them in its own environment or in
+#' any of its parents (with `inherit = TRUE`).
+#'
+#' @inheritParams env_unbind
+#' @return A logical vector as long as `nms`.
+#' @export
+#' @examples
+#' parent <- child_env(NULL, foo = "foo")
+#' env <- child_env(parent, bar = "bar")
+#'
+#' # env does not own `foo` but sees it in its parent environment:
+#' env_has(env, "foo")
+#' env_has(env, "foo", inherit = TRUE)
+env_has <- function(env = caller_env(), nms, inherit = FALSE) {
+  map_lgl(nms, exists, envir = get_env(env), inherits = inherit)
+}
+
+#' Get an object from an environment
+#'
+#' `env_get()` extracts an object from an environment `env`. By
+#' default, it does not look in the parent environments.
+#'
+#' @inheritParams get_env
+#' @inheritParams env_has
+#' @param nm The name of a binding.
+#' @return An object if it exists. Otherwise, throws an error.
+#' @export
+#' @examples
+#' parent <- child_env(NULL, foo = "foo")
+#' env <- child_env(parent, bar = "bar")
+#'
+#' # This throws an error because `foo` is not directly defined in env:
+#' # env_get(env, "foo")
+#'
+#' # However `foo` can be fetched in the parent environment:
+#' env_get(env, "foo", inherit = TRUE)
+env_get <- function(env = caller_env(), nm, inherit = FALSE) {
+  get(nm, envir = get_env(env), inherits = inherit)
+}
+
+#' Names of symbols bound in an environment
+#'
+#' `env_names()` returns object names from an environment `env` as a
+#' character vector. All names are returned, even those starting with
+#' a dot.
+#'
+#' @section Names of symbols and objects:
+#'
+#' Technically, objects are bound to symbols rather than strings,
+#' since the R interpreter evaluates symbols (see [is_expr()] for a
+#' discussion of symbolic objects versus literal objects). However it
+#' is often more convenient to work with strings. In rlang
+#' terminology, the string corresponding to a symbol is called the
+#' _name_ of the symbol (or by extension the name of an object bound
+#' to a symbol).
+#'
+#' @section Encoding:
+#'
+#' There are deep encoding issues when you convert a string to symbol
+#' and vice versa. Symbols are _always_ in the native encoding (see
+#' [set_chr_encoding()]). If that encoding (let's say latin1) cannot
+#' support some characters, these characters are serialised to
+#' ASCII. That's why you sometimes see strings looking like
+#' `<U+1234>`, especially if you're running Windows (as R doesn't
+#' support UTF-8 as native encoding on that platform).
+#'
+#' To alleviate some of the encoding pain, `env_names()` always
+#' returns a UTF-8 character vector (which is fine even on Windows)
+#' with unicode points unserialised.
+#'
+#' @inheritParams get_env
+#' @return A character vector of object names.
+#' @export
+#' @examples
+#' env <- env(a = 1, b = 2)
+#' env_names(env)
+env_names <- function(env) {
+  nms <- names(get_env(env))
+  .Call(rlang_unescape_character, nms)
+}
+
+#' Clone an environment
+#'
+#' This creates a new environment containing exactly the same objects,
+#' optionally with a new parent.
+#'
+#' @inheritParams get_env
+#' @param parent The parent of the cloned environment.
+#' @export
+#' @examples
+#' env <- env(!!! mtcars)
+#' clone <- env_clone(env)
+#' identical(env, clone)
+#' identical(env$cyl, clone$cyl)
+env_clone <- function(env, parent = env_parent(env)) {
+  env <- get_env(env)
+  list2env(as.list(env, all.names = TRUE), parent = parent)
+}
+
+#' Does environment inherit from another environment?
+#'
+#' This returns `TRUE` if `x` has `ancestor` among its parents.
+#'
+#' @inheritParams get_env
+#' @param ancestor Another environment from which `x` might inherit.
+#' @export
+env_inherits <- function(env, ancestor) {
+  env <- get_env(env)
+  stopifnot(is_env(ancestor) && is_env(env))
+
+  while (!is_empty_env(env_parent(env))) {
+    env <- env_parent(env)
+    if (is_reference(env, ancestor)) {
+      return(TRUE)
+    }
+  }
+
+  is_empty_env(env)
+}
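[Editor's note: the parent walk performed by `env_inherits()` can be reproduced with base R alone. This is a sketch of the idea, not part of the rlang API:]

```r
# Walk the parent chain of `env` and report whether `ancestor`
# appears among its parents.
inherits_from <- function(env, ancestor) {
  while (!identical(env, emptyenv())) {
    env <- parent.env(env)
    if (identical(env, ancestor)) {
      return(TRUE)
    }
  }
  FALSE
}

parent <- new.env()
child <- new.env(parent = parent)
inherits_from(child, parent)  # TRUE
inherits_from(parent, child)  # FALSE
```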
+
+
+#' Scoped environments
+#'
+#' @description
+#'
+#' Scoped environments are named environments which form a
+#' parent-child hierarchy called the search path. They define what
+#' objects you can see (are in scope) from your workspace. They
+#' typically are package environments, i.e. special environments
+#' containing all exported functions from a package (and whose parent
+#' environment is the package namespace, which also contains
+#' unexported functions). Package environments are attached to the
+#' search path with [base::library()]. Note however that any
+#' environment can be attached to the search path, for example with
+#' the discouraged [base::attach()], which transforms vectors into
+#' scoped environments.
+#'
+#' - You can list all scoped environments with `scoped_names()`. Unlike
+#'   [base::search()], it also mentions the empty environment that
+#'   terminates the search path (it is given the name `"NULL"`).
+#'
+#' - `scoped_envs()` returns all environments on the search path,
+#'   including the empty environment.
+#'
+#' - `pkg_env()` takes a package name and returns the scoped
+#'   environment of that package if it is attached to the search
+#'   path, and throws an error otherwise.
+#'
+#' - `is_scoped()` allows you to check whether a named environment is
+#'   on the search path.
+#'
+#' @section Search path:
+#'
+#' The search path is a chain of scoped environments where newly
+#' attached environments are the children of earlier ones. However, the
+#' global environment, where everything you define at top-level ends
+#' up, is pinned as the head of that linked chain. Likewise, the base
+#' package environment is pinned as the tail of the chain. You can
+#' retrieve those environments with `global_env()` and `base_env()`
+#' respectively. The global environment is also the environment of the
+#' very first evaluation frame on the stack, see [global_frame()] and
+#' [ctxt_stack()].
+#'
+#' @param nm The name of an environment attached to the search
+#'   path. Call [base::search()] to see what is currently on the path.
+#' @export
+#' @examples
+#' # List the names of scoped environments:
+#' nms <- scoped_names()
+#' nms
+#'
+#' # The global environment is always the first in the chain:
+#' scoped_env(nms[[1]])
+#'
+#' # And the scoped environment of the base package is always the last:
+#' scoped_env(nms[[length(nms)]])
+#'
+#' # These two environments have their own shortcuts:
+#' global_env()
+#' base_env()
+#'
+#' # Packages appear in the search path with a special name. Use
+#' # pkg_env_name() to create that name:
+#' pkg_env_name("rlang")
+#' scoped_env(pkg_env_name("rlang"))
+#'
+#' # Alternatively, get the scoped environment of a package with
+#' # pkg_env():
+#' pkg_env("utils")
+scoped_env <- function(nm) {
+  if (identical(nm, "NULL")) {
+    return(empty_env())
+  }
+  if (!is_scoped(nm)) {
+    stop(paste0(nm, " is not in scope"), call. = FALSE)
+  }
+  as.environment(nm)
+}
+#' @rdname scoped_env
+#' @param pkg The name of a package.
+#' @export
+pkg_env <- function(pkg) {
+  pkg_name <- pkg_env_name(pkg)
+  scoped_env(pkg_name)
+}
+#' @rdname scoped_env
+#' @export
+pkg_env_name <- function(pkg) {
+  paste0("package:", pkg)
+}
+
+#' @rdname scoped_env
+#' @export
+scoped_names <- function() {
+  c(search(), "NULL")
+}
+#' @rdname scoped_env
+#' @export
+scoped_envs <- function() {
+  envs <- c(.GlobalEnv, env_parents(.GlobalEnv))
+  set_names(envs, scoped_names())
+}
+#' @rdname scoped_env
+#' @export
+is_scoped <- function(nm) {
+  if (!is_scalar_character(nm)) {
+    stop("`nm` must be a string", call. = FALSE)
+  }
+  nm %in% scoped_names()
+}
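A short usage sketch tying these helpers together (illustrative only: it assumes the rlang package defined in this file is loaded; the base package is always attached in a regular session):

```r
library(rlang)

# The base package is always attached, so its scoped environment can
# always be looked up by name:
is_scoped("package:base")    # TRUE
scoped_env("package:base")

# Names not on the search path signal an error:
tryCatch(
  scoped_env("package:not-attached"),
  error = function(e) conditionMessage(e)
)
```

These are thin wrappers around `base::search()` and `base::as.environment()`, which is why `scoped_names()` mirrors the search path plus the `"NULL"` sentinel for the empty environment.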
+
+#' @rdname scoped_env
+#' @export
+base_env <- baseenv
+#' @rdname scoped_env
+#' @export
+global_env <- globalenv
+
+#' Get the empty environment
+#'
+#' The empty environment is the only one that does not have a parent.
+#' It is always used as the tail of a scope chain such as the search
+#' path (see [scoped_names()]).
+#'
+#' @export
+#' @examples
+#' # Create environments with nothing in scope:
+#' child_env(empty_env())
+empty_env <- emptyenv
+
+#' Get the namespace of a package
+#'
+#' Namespaces are the environment where all the functions of a package
+#' live. The parent environments of namespaces are the `imports`
+#' environments, which contain all the functions imported from other
+#' packages.
+#'
+#' @param pkg The name of a package. If `NULL`, the surrounding
+#'   namespace is returned, or an error is issued if not called within
+#'   a namespace. If a function, the enclosure of that function is
+#'   checked.
+#' @seealso [pkg_env()]
+#' @export
+ns_env <- function(pkg = NULL) {
+  if (is_null(pkg)) {
+    bottom <- topenv(caller_env())
+    if (!isNamespace(bottom)) abort("not in a namespace")
+    bottom
+  } else if (is_function(pkg)) {
+    env <- env_parent(pkg)
+    if (isNamespace(env)) {
+      env
+    } else {
+      NULL
+    }
+  } else {
+    asNamespace(pkg)
+  }
+}
+#' @rdname ns_env
+#' @export
+ns_imports_env <- function(pkg = NULL) {
+  env_parent(ns_env(pkg))
+}
+#' @rdname ns_env
+#' @export
+ns_env_name <- function(pkg = NULL) {
+  if (is_null(pkg)) {
+    pkg <- with_env(caller_env(), ns_env())
+  } else if (is_function(pkg)) {
+    pkg <- get_env(pkg)
+  }
+  unname(getNamespaceName(pkg))
+}
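Since `ns_env()` and friends ship without an `@examples` block here, a minimal illustrative sketch (assuming rlang is loaded; `stats` is just an example package):

```r
library(rlang)

# Namespaces contain every function of a package, exported or not:
ns <- ns_env("stats")

# Their parent is the imports environment:
ns_imports_env("stats")

# And the package name can be recovered from a namespace environment:
ns_env_name(ns)
```

Under the hood these delegate to `base::asNamespace()`, `base::getNamespaceName()` and parent lookup, so they behave consistently with the base namespace machinery.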
+
+#' Is a package installed in the library?
+#'
+#' This checks that a package is installed with minimal side effects.
+#' If installed, the package will be loaded but not attached.
+#'
+#' @param pkg The name of a package.
+#' @return `TRUE` if the package is installed, `FALSE` otherwise.
+#' @export
+#' @examples
+#' is_installed("utils")
+#' is_installed("ggplot5")
+is_installed <- function(pkg) {
+  is_true(requireNamespace(pkg, quietly = TRUE))
+}
+
+
+env_type <- function(env) {
+  if (is_reference(env, global_env())) {
+    "global"
+  } else if (is_reference(env, empty_env())) {
+    "empty"
+  } else if (is_reference(env, base_env())) {
+    "base"
+  } else if (is_frame_env(env)) {
+    "frame"
+  } else {
+    "local"
+  }
+}
+friendly_env_type <- function(type) {
+  switch(type,
+    global = "the global environment",
+    empty = "the empty environment",
+    base = "the base environment",
+    frame = "a frame environment",
+    local = "a local environment",
+    abort("Internal error: unknown environment type")
+  )
+}
+
+env_format <- function(env) {
+  type <- env_type(env)
+
+  if (type %in% c("frame", "local")) {
+    addr <- sxp_address(get_env(env))
+    type <- paste(type, addr)
+  }
+
+  type
+}
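The two helpers above classify environments by identity. A sketch of the intended behaviour (illustrative only: `env_type()` and `env_format()` are internal and not exported, so `:::` access is assumed):

```r
# Classification by identity against the well-known environments:
rlang:::env_type(globalenv())   # "global"
rlang:::env_type(emptyenv())    # "empty"
rlang:::env_type(baseenv())     # "base"
rlang:::env_type(new.env())     # "local"

# env_format() appends the memory address for frame and local
# environments, which distinguishes otherwise anonymous environments:
rlang:::env_format(new.env())   # e.g. "local 0x55d2f1a2b3c8"
```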
diff --git a/R/eval-tidy.R b/R/eval-tidy.R
new file mode 100644
index 0000000..75d294d
--- /dev/null
+++ b/R/eval-tidy.R
@@ -0,0 +1,350 @@
+#' Evaluate an expression tidily
+#'
+#' @description
+#'
+#' `eval_tidy()` is a variant of [base::eval()] and [eval_bare()] that
+#' powers the [tidy evaluation
+#' framework](http://rlang.tidyverse.org/articles/tidy-evaluation.html).
+#' It evaluates `expr` in an [overscope][as_overscope] where the
+#' special definitions enabling tidy evaluation are installed. This
+#' enables the following features:
+#'
+#' - Overscoped data. You can supply a data frame or list of named
+#'   vectors to the `data` argument. The data contained in this list
+#'   has precedence over the objects in the contextual environment.
+#'   This is similar to how [base::eval()] accepts a list instead of
+#'   an environment.
+#'
+#' - Self-evaluation of quosures. Within the overscope, quosures act
+#'   like promises. When a quosure within an expression is evaluated,
+#'   it automatically invokes the quoted expression in the captured
+#'   environment (chained to the overscope). Note that quosures do not
+#'   always get evaluated because of lazy semantics, e.g. `TRUE ||
+#'   ~never_called`.
+#'
+#' - Pronouns. `eval_tidy()` installs the `.env` and `.data`
+#'   pronouns. `.env` contains a reference to the calling environment,
+#'   while `.data` refers to the `data` argument. These pronouns let
+#'   you be explicit about where to find values, and throw errors if
+#'   you try to access non-existent values.
+#'
+#' @param expr An expression.
+#' @param data A list (or data frame). This is passed to the
+#'   [as_dictionary()] coercer, a generic used to transform an object
+#'   to a proper data source. If you want to make `eval_tidy()` work
+#'   for your own objects, you can define a method for this generic.
+#' @param env The lexical environment in which to evaluate `expr`.
+#' @seealso [quo()], [quasiquotation]
+#' @export
+#' @examples
+#' # Like base::eval() and eval_bare(), eval_tidy() evaluates quoted
+#' # expressions:
+#' expr <- expr(1 + 2 + 3)
+#' eval_tidy(expr)
+#'
+#' # Like base::eval(), it lets you supply overscoping data:
+#' foo <- 1
+#' bar <- 2
+#' expr <- quote(list(foo, bar))
+#' eval_tidy(expr, list(foo = 100))
+#'
+#' # The main difference is that quosures self-evaluate within
+#' # eval_tidy():
+#' quo <- quo(1 + 2 + 3)
+#' eval(quo)
+#' eval_tidy(quo)
+#'
+#' # Quosures also self-evaluate deep in an expression, not just when
+#' # directly supplied to eval_tidy():
+#' expr <- expr(list(list(list(!! quo))))
+#' eval(expr)
+#' eval_tidy(expr)
+#'
+#' # Self-evaluation of quosures is powerful because they
+#' # automatically capture their enclosing environment:
+#' foo <- function(x) {
+#'   y <- 10
+#'   quo(x + y)
+#' }
+#' f <- foo(1)
+#'
+#' # This quosure refers to `x` and `y` from `foo()`'s evaluation
+#' # frame. That's evaluated consistently by eval_tidy():
+#' f
+#' eval_tidy(f)
+#'
+#'
+#' # Finally, eval_tidy() installs handy pronouns that allow users to
+#' # be explicit about where to find symbols. If you supply data,
+#' # eval_tidy() will look there first:
+#' cyl <- 10
+#' eval_tidy(quo(cyl), mtcars)
+#'
+#' # To avoid ambiguity and be explicit, you can use the `.env` and
+#' # `.data` pronouns:
+#' eval_tidy(quo(.data$cyl), mtcars)
+#' eval_tidy(quo(.env$cyl), mtcars)
+#'
+#' # Note that instead of using `.env` it is often equivalent to
+#' # unquote a value. The only difference is the timing of evaluation
+#' # since unquoting happens earlier (when the quosure is created):
+#' eval_tidy(quo(!! cyl), mtcars)
+#' @name eval_tidy
+eval_tidy <- function(expr, data = NULL, env = caller_env()) {
+  if (is_list(expr)) {
+    return(map(expr, eval_tidy, data = data))
+  }
+
+  if (!inherits(expr, "quosure")) {
+    expr <- new_quosure(expr, env)
+  }
+  overscope <- as_overscope(expr, data)
+  on.exit(overscope_clean(overscope))
+
+  overscope_eval_next(overscope, expr)
+}
+
+#' Data pronoun for tidy evaluation
+#'
+#' This pronoun is installed by functions performing [tidy
+#' evaluation][eval_tidy]. It allows you to refer to overscoped data
+#' explicitly.
+#'
+#' You can import this object in your package namespace to avoid `R
+#' CMD check` errors when referring to overscoped objects.
+#'
+#' @name tidyeval-data
+#' @export
+#' @examples
+#' quo <- quo(.data$foo)
+#' eval_tidy(quo, list(foo = "bar"))
+.data <- NULL
+delayedAssign(".data", as_dictionary(list(), read_only = TRUE))
+
+#' Tidy evaluation in a custom environment
+#'
+#' We recommend using [eval_tidy()] in your DSLs as much as possible
+#' to ensure some consistency across packages (`.data` and `.env`
+#' pronouns, etc.). However, some DSLs might need a different
+#' evaluation environment. In this case, you can call `eval_tidy_()`
+#' with the bottom and the top of your custom overscope (see
+#' [as_overscope()] for more information).
+#'
+#' Note that `eval_tidy_()` always installs a `.env` pronoun in the
+#' bottom environment of your dynamic scope. This pronoun provides a
+#' shortcut to the original lexical enclosure (typically, the dynamic
+#' environment of a captured argument, see [enquo()]). It also
+#' cleans up the overscope after evaluation. See [overscope_eval_next()]
+#' for evaluating several quosures in the same overscope.
+#'
+#' @inheritParams eval_tidy
+#' @inheritParams as_overscope
+#' @export
+eval_tidy_ <- function(expr, bottom, top = NULL, env = caller_env()) {
+  top <- top %||% bottom
+  overscope <- new_overscope(bottom, top)
+  on.exit(overscope_clean(overscope))
+
+  if (!inherits(expr, "quosure")) {
+    expr <- new_quosure(expr, env)
+  }
+  overscope_eval_next(overscope, expr)
+}
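By contrast with `eval_tidy_()`, which builds and cleans an overscope on every call, several quosures can share one dynamic scope via `overscope_eval_next()`. A sketch (assuming rlang is loaded; `mtcars` is just example data):

```r
library(rlang)

quo_cyl <- quo(cyl)
quo_am  <- quo(am)

# Build the overscope once, evaluate several quosures in it, then
# clean up the self-evaluation definitions:
overscope <- as_overscope(quo_cyl, data = mtcars)
cyls <- overscope_eval_next(overscope, quo_cyl)
ams  <- overscope_eval_next(overscope, quo_am)
overscope_clean(overscope)

identical(cyls, mtcars$cyl)
```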
+
+
+#' Create a dynamic scope for tidy evaluation
+#'
+#' Tidy evaluation works by rescoping a set of symbols (column names
+#' of a data frame for example) to custom bindings. While doing this,
+#' it is important to keep the original environment of captured
+#' expressions in scope. The gist of tidy evaluation is to create a
+#' dynamic scope containing custom bindings that should have
+#' precedence when expressions are evaluated, and chain this scope
+#' (set of linked environments) to the lexical enclosure of formulas
+#' under evaluation. During tidy evaluation, formulas are transformed
+#' into formula-promises and will self-evaluate their RHS as soon as
+#' they are called. The main trick of tidyeval is to consistently
+#' rechain the dynamic scope to the lexical enclosure of each tidy
+#' quote under evaluation.
+#'
+#' These functions are useful for embedding the tidy evaluation
+#' framework in your own DSLs with your own evaluating function. They
+#' let you create a custom dynamic scope. That is, a set of chained
+#' environments whose bottom serves as evaluation environment and
+#' whose top is rechained to the current lexical enclosure. But most
+#' of the time, you can just use [eval_tidy_()] as it will take
+#' care of installing the tidyeval components in your custom dynamic
+#' scope.
+#'
+#' * `as_overscope()` is the function that powers [eval_tidy()]. It
+#'   could be useful if you cannot use `eval_tidy()` for some reason,
+#'   but serves mostly as an example of how to build a dynamic scope
+#'   for tidy evaluation. In this case, it creates pronouns `.data`
+#'   and `.env` and buries all dynamic bindings from the supplied
+#'   `data` in new environments.
+#'
+#' * `new_overscope()` is called by `as_overscope()` and
+#'   [eval_tidy_()]. It installs the definitions for making
+#'   formulas self-evaluate and for formula-guards. It also installs
+#'   the pronoun `.top_env` that helps keep track of the boundary
+#'   of the dynamic scope. If you evaluate a tidy quote with
+#'   [eval_tidy_()], you don't need to use this.
+#'
+#' * `overscope_eval_next()` is useful when you have several quosures
+#'   to evaluate in the same dynamic scope. It is a simple wrapper
+#'   around [eval_bare()] that updates the `.env` pronoun and rechains
+#'   the dynamic scope to the enclosure of the next quosure to
+#'   evaluate.
+#'
+#' * Once an expression has been evaluated in the tidy environment,
+#'   it's a good idea to clean up the definitions that make
+#'   self-evaluation of formulas possible with `overscope_clean()`.
+#'   Otherwise your users may face unexpected results in specific
+#'   corner cases (e.g. when the evaluation environment is leaked, see
+#'   examples). Note that this function is automatically called by
+#'   [eval_tidy_()].
+#'
+#' @param quo A [quosure].
+#' @param data Additional data to put in scope.
+#' @return An overscope environment.
+#' @export
+#' @examples
+#' # Evaluating in a tidy evaluation environment enables all tidy
+#' # features:
+#' expr <- quote(list(.data$cyl, ~letters))
+#' f <- as_quosure(expr)
+#' overscope <- as_overscope(f, data = mtcars)
+#' overscope_eval_next(overscope, f)
+#'
+#' # However, you need to clean up the environment after evaluation.
+#' # Otherwise the leftover definitions for self-evaluation of
+#' # formulas might cause unexpected results:
+#' fn <- overscope_eval_next(overscope, ~function() ~letters)
+#' fn()
+#'
+#' overscope_clean(overscope)
+#' fn()
+as_overscope <- function(quo, data = NULL) {
+  data_src <- as_dictionary(data, read_only = TRUE)
+  enclosure <- f_env(quo) %||% base_env()
+
+  # Create bottom environment pre-chained to the lexical scope
+  bottom <- child_env(enclosure)
+
+  # Emulate dynamic scope for established data
+  if (is_vector(data)) {
+    bottom <- env_bury(bottom, !!! discard_unnamed(data))
+  } else if (is_env(data)) {
+    bottom <- env_clone(data, parent = bottom)
+  } else if (!is_null(data)) {
+    abort("`data` must be a list or an environment")
+  }
+
+  # Install data pronoun
+  bottom$.data <- data_src
+
+  new_overscope(bottom, enclosure = enclosure)
+}
+
+#' @rdname as_overscope
+#' @param bottom This is the environment (or the bottom of a set of
+#'   environments) containing definitions for overscoped symbols. The
+#'   bottom environment typically contains pronouns (like `.data`)
+#'   while its direct parents contain the overscoping bindings. The
+#'   last one of these parents is the `top`.
+#' @param top The top environment of the overscope. During tidy
+#'   evaluation, this environment is chained and rechained to lexical
+#'   enclosures of self-evaluating formulas (or quosures). This is the
+#'   mechanism that ensures hygienic scoping: the bindings in the
+#'   overscope have precedence, but the bindings in the dynamic
+#'   environment where the tidy quotes were created in the first place
+#'   are in scope as well.
+#' @param enclosure The default enclosure. After a quosure is done
+#'   self-evaluating, the overscope is rechained to the default
+#'   enclosure.
+#' @return A valid overscope: a child environment of `bottom`
+#'   containing the definitions enabling tidy evaluation
+#'   (self-evaluating quosures, formula-unguarding, ...).
+#' @export
+new_overscope <- function(bottom, top = NULL, enclosure = base_env()) {
+  top <- top %||% bottom
+
+  # Create a child because we don't know what might be in bottom_env.
+  # This way we can just remove all bindings between the parent of
+  # `overscope` and `overscope_top`. We don't want to clean everything in
+  # `overscope` in case the environment is leaked, e.g. through a
+  # closure that might rely on some local bindings installed by the
+  # user.
+  overscope <- child_env(bottom)
+
+  overscope$`~` <- f_self_eval(overscope, top)
+  overscope$.top_env <- top
+  overscope$.env <- enclosure
+
+  overscope
+}
+#' @rdname as_overscope
+#' @param overscope A valid overscope containing bindings for `~`,
+#'   `.top_env` and `.env`, and whose parents contain overscoped
+#'   bindings for tidy evaluation.
+#' @param env The lexical enclosure in case `quo` is not a validly
+#'   scoped quosure. This is the [base environment][base_env] by
+#'   default.
+#' @export
+overscope_eval_next <- function(overscope, quo, env = base_env()) {
+  quo <- as_quosureish(quo, env)
+  lexical_env <- f_env(quo)
+
+  overscope$.env <- lexical_env
+  mut_env_parent(overscope$.top_env, lexical_env)
+
+  .Call(rlang_eval, f_rhs(quo), overscope)
+}
+#' @rdname as_overscope
+#' @export
+overscope_clean <- function(overscope) {
+  cur_env <- env_parent(overscope)
+  top_env <- overscope$.top_env %||% cur_env
+
+  # At this level we only want to remove what we have installed
+  env_unbind(overscope, c("~", ".top_env", ".env"))
+
+  while (!identical(cur_env, env_parent(top_env))) {
+    env_unbind(cur_env, names(cur_env))
+    cur_env <- env_parent(cur_env)
+  }
+
+  overscope
+}
+
+f_self_eval <- function(overscope, overscope_top) {
+  function(...) {
+    f <- sys.call()
+
+    # Evaluate formula in the overscope with base::`~`()
+    if (!inherits(f, "quosure")) {
+      return(tilde_eval(f, overscope))
+    }
+    if (quo_is_missing(f)) {
+      return(missing_arg())
+    }
+
+    # Swap enclosures temporarily by rechaining the top of the dynamic
+    # scope to the enclosure of the new formula, if it has one
+    mut_env_parent(overscope_top, f_env(f) %||% overscope$.env)
+    on.exit(mut_env_parent(overscope_top, overscope$.env))
+
+    .Call(rlang_eval, f_rhs(f), overscope)
+  }
+}
+tilde_eval <- function(f, env) {
+  if (is_env(attr(f, ".Environment"))) {
+    return(f)
+  }
+
+  # Inline the base primitive because overscopes override `~` to make
+  # quosures self-evaluate
+  f <- new_language(base::`~`, node_cdr(f))
+
+  # Change it back because the result still has the primitive inlined
+  mut_node_car(eval_bare(f, env), sym_tilde)
+}
diff --git a/R/eval.R b/R/eval.R
new file mode 100644
index 0000000..9f4a777
--- /dev/null
+++ b/R/eval.R
@@ -0,0 +1,205 @@
+#' Evaluate an expression in an environment
+#'
+#' `eval_bare()` is a lightweight version of the base function
+#' [base::eval()]. It does not accept supplementary data, but it is
+#' more efficient and does not clutter the evaluation stack.
+#' Technically, `eval_bare()` is a simple wrapper around the C
+#' function `Rf_eval()`.
+#'
+#' `base::eval()` inserts two call frames in the stack, the second of
+#' which features the `envir` parameter as frame environment. This may
+#' unnecessarily clutter the evaluation stack and it can change
+#' evaluation semantics with stack-sensitive functions in the case
+#' where `env` is an evaluation environment of a stack frame (see
+#' [ctxt_stack()]). Since the base function `eval()` creates a new
+#' evaluation context with `env` as frame environment, there are
+#' actually two contexts with the same evaluation environment on the
+#' stack when `expr` is evaluated. Thus, any command that looks up
+#' frames on the stack (stack-sensitive functions) may find the
+#' parasite frame set up by `eval()` rather than the original frame
+#' targeted by `env`. As a result, code evaluated with `base::eval()`
+#' does not have the property of stack consistency, and stack-sensitive
+#' functions like [base::return()] and [base::parent.frame()]
+#' may return misleading results.
+#'
+#' @param expr An expression to evaluate.
+#' @param env The environment in which to evaluate the expression.
+#' @seealso with_env
+#' @export
+#' @examples
+#' # eval_bare() works just like base::eval():
+#' env <- child_env(NULL, foo = "bar")
+#' expr <- quote(foo)
+#' eval_bare(expr, env)
+#'
+#' # To explore the consequences of stack inconsistent semantics, let's
+#' # create a function that evaluates `parent.frame()` deep in the call
+#' # stack, in an environment corresponding to a frame in the middle of
+#' # the stack. For consistency with R's lazy evaluation semantics, we'd
+#' # expect to get the caller of that frame as the result:
+#' fn <- function(eval_fn) {
+#'   list(
+#'     returned_env = middle(eval_fn),
+#'     actual_env = get_env()
+#'   )
+#' }
+#' middle <- function(eval_fn) {
+#'   deep(eval_fn, get_env())
+#' }
+#' deep <- function(eval_fn, eval_env) {
+#'   expr <- quote(parent.frame())
+#'   eval_fn(expr, eval_env)
+#' }
+#'
+#' # With eval_bare(), we do get the expected environment:
+#' fn(rlang::eval_bare)
+#'
+#' # But that's not the case with base::eval():
+#' fn(base::eval)
+#'
+#' # Another difference of eval_bare() compared to base::eval() is
+#' # that it does not insert parasite frames in the evaluation stack:
+#' get_stack <- quote(identity(ctxt_stack()))
+#' eval_bare(get_stack)
+#' eval(get_stack)
+eval_bare <- function(expr, env = parent.frame()) {
+  .Call(rlang_eval, expr, env)
+}
+
+#' Evaluate an expression within a given environment
+#'
+#' These functions evaluate `expr` within a given environment (`env`
+#' for `with_env()`, or the child of the current environment for
+#' `locally`). They rely on [eval_bare()] which features a lighter
+#' evaluation mechanism than base R [base::eval()], and which also has
+#' some subtle implications when evaluating stack-sensitive functions
+#' (see help for [eval_bare()]).
+#'
+#' `locally()` is equivalent to the base function
+#' [base::local()] but it produces a much cleaner
+#' evaluation stack, and has stack-consistent semantics. It is thus
+#' more suited for experimenting with the R language.
+#'
+#' @inheritParams eval_bare
+#' @param env An environment within which to evaluate `expr`. Can be
+#'   an object with an [as_env()] method.
+#' @export
+#' @examples
+#' # with_env() is handy to create formulas with a given environment:
+#' env <- child_env("rlang")
+#' f <- with_env(env, ~new_formula())
+#' identical(f_env(f), env)
+#'
+#' # Or functions with a given enclosure:
+#' fn <- with_env(env, function() NULL)
+#' identical(get_env(fn), env)
+#'
+#'
+#' # Unlike eval() it doesn't create duplicates on the evaluation
+#' # stack. You can thus use it e.g. to create non-local returns:
+#' fn <- function() {
+#'   g(get_env())
+#'   "normal return"
+#' }
+#' g <- function(env) {
+#'   with_env(env, return("early return"))
+#' }
+#' fn()
+#'
+#'
+#' # Since env is passed to as_env(), it can be any object with an
+#' # as_env() method. For strings, the pkg_env() is returned:
+#' with_env("base", ~mtcars)
+#'
+#' # This can be handy to put dictionaries in scope:
+#' with_env(mtcars, cyl)
+with_env <- function(env, expr) {
+  .Call(rlang_eval, substitute(expr), as_env(env, caller_env()))
+}
+#' @rdname with_env
+#' @export
+locally <- function(expr) {
+  .Call(rlang_eval, substitute(expr), child_env(caller_env()))
+}
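`locally()` has no example above; a sketch of its behaviour (equivalent to `base::local()` but with a cleaner evaluation stack; assumes rlang is loaded and no `tmp` binding exists in the calling environment):

```r
library(rlang)

# Evaluate in a child of the current environment, so local bindings
# do not leak into the caller:
x <- locally({
  tmp <- 10
  tmp * 2
})
x               # 20

# `tmp` was bound in the discarded child environment, not here:
exists("tmp")   # FALSE
```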
+
+#' Invoke a function with a list of arguments
+#'
+#' Normally, you invoke an R function by typing arguments manually. A
+#' powerful alternative is to call a function with a list of arguments
+#' assembled programmatically. This is the purpose of `invoke()`.
+#'
+#' Technically, `invoke()` is basically a version of [base::do.call()]
+#' that creates cleaner call traces because it does not inline the
+#' function and the arguments in the call (see examples). To achieve
+#' this, `invoke()` creates a child environment of `.env` with `.fn`
+#' and all arguments bound to new symbols (see [env_bury()]). It then
+#' uses the same strategy as [eval_bare()] to evaluate with minimal
+#' noise.
+#'
+#' @param .fn A function to invoke. Can be a function object or the
+#'   name of a function in scope of `.env`.
+#' @param .args,... List of arguments (possibly named) to be passed to
+#'   `.fn`.
+#' @param .env The environment in which to call `.fn`.
+#' @param .bury A character vector of length 2. The first string
+#'   specifies the name the function should have in the call
+#'   recorded in the evaluation stack. The second string specifies a
+#'   prefix for the argument names. Set `.bury` to `NULL` if you
+#'   prefer to inline the function and its arguments in the call.
+#' @export
+#' @examples
+#' # invoke() has the same purpose as do.call():
+#' invoke(paste, letters)
+#'
+#' # But it creates much cleaner calls:
+#' invoke(call_inspect, mtcars)
+#'
+#' # and stacktraces:
+#' fn <- function(...) sys.calls()
+#' invoke(fn, list(mtcars))
+#'
+#' # Compare to do.call():
+#' do.call(call_inspect, mtcars)
+#' do.call(fn, list(mtcars))
+#'
+#'
+#' # Specify the function name either by supplying a string
+#' # identifying the function (it should be visible in .env):
+#' invoke("call_inspect", letters)
+#'
+#' # Or by changing the .bury argument, with which you can also change
+#' # the argument prefix:
+#' invoke(call_inspect, mtcars, .bury = c("inspect!", "col"))
+invoke <- function(.fn, .args = list(), ...,
+                   .env = caller_env(), .bury = c(".fn", "")) {
+  args <- c(.args, list(...))
+
+  if (is_null(.bury) || !length(args)) {
+    if (is_scalar_character(.fn)) {
+      .fn <- env_get(.env, .fn, inherit = TRUE)
+    }
+    call <- lang(.fn, !!! args)
+    return(.Call(rlang_eval, call, .env))
+  }
+
+
+  if (!is_character(.bury, 2L)) {
+    abort("`.bury` must be a character vector of length 2")
+  }
+  arg_prefix <- .bury[[2]]
+  fn_nm <- .bury[[1]]
+
+  buried_nms <- paste0(arg_prefix, seq_along(args))
+  buried_args <- set_names(args, buried_nms)
+  .env <- env_bury(.env, !!! buried_args)
+  args <- set_names(buried_nms, names(args))
+  args <- syms(args)
+
+  if (is_function(.fn)) {
+    env_bind(.env, !! fn_nm := .fn)
+    .fn <- fn_nm
+  }
+
+  call <- lang(.fn, !!! args)
+  .Call(rlang_eval, call, .env)
+}
diff --git a/R/expr-lang.R b/R/expr-lang.R
new file mode 100644
index 0000000..7c31c2e
--- /dev/null
+++ b/R/expr-lang.R
@@ -0,0 +1,554 @@
+#' Create a call
+#'
+#' @description
+#'
+#' Language objects are (with symbols) one of the two types of
+#' [symbolic][is_symbolic] objects in R. These symbolic objects form
+#' the backbone of [expressions][is_expr]. They represent a value,
+#' unlike literal objects which are their own values. While symbols
+#' are directly [bound][env_bind] to a value, language objects
+#' represent _function calls_, which is why they are commonly referred
+#' to as calls.
+#'
+#' * `lang()` creates a call from a function name (or a literal
+#'   function to inline in the call) and a list of arguments.
+#'
+#' * `new_language()` is bare-bones and takes a head and a tail. The
+#'   head must be [callable][is_callable] and the tail must be a
+#'   [pairlist]. See section on calls as parse trees below. This
+#'   constructor is useful to avoid costly coercions between lists and
+#'   pairlists of arguments.
+#'
+#' @section Calls as parse trees:
+#'
+#' Language objects are structurally identical to
+#' [pairlists][pairlist]. They are containers of two objects, the head
+#' and the tail (also called the CAR and the CDR).
+#'
+#' - The head contains the function to call, either literally or
+#'   symbolically. If a literal function, the call is said to be
+#'   inlined. If a symbol, the call is named. If another call, it is
+#'   recursive. `foo()()` would be an example of a recursive call
+#'   whose head contains another call. See [lang_type_of()] and
+#'   [is_callable()].
+#'
+#' - The tail contains the arguments and must be a [pairlist].
+#'
+#' You can retrieve those components with [lang_head()] and
+#' [lang_tail()]. Since language nodes can contain other nodes (either
+#' calls or pairlists), they are capable of forming a tree. When R
+#' [parses][parse_expr] an expression, it saves the parse tree in a
+#' data structure composed of language and pairlist nodes. It is
+#' precisely because the parse tree is saved in first-class R objects
+#' that it is possible for functions to [capture][expr] their
+#' arguments unevaluated.
+#'
+#' @section Call versus language:
+#'
+#' `call` is the old S [mode][base::mode] of these objects while
+#' `language` is the R [type][base::typeof]. While it is usually
+#' better to avoid using S terminology, it would probably be even more
+#' confusing to systematically refer to "calls" as "language". rlang
+#' still uses `lang` as a particle for functions dealing with calls,
+#' for consistency.
+#'
+#' @param .fn Function to call. Must be a callable object: a string,
+#'   symbol, call, or a function.
+#' @param ... Arguments to the call either in or out of a list. Dots
+#'   are evaluated with [explicit splicing][dots_list].
+#' @param .ns Namespace with which to prefix `.fn`. Must be a string
+#'   or symbol.
+#' @seealso lang_modify
+#' @export
+#' @examples
+#' # fn can either be a string, a symbol or a call
+#' lang("f", a = 1)
+#' lang(quote(f), a = 1)
+#' lang(quote(f()), a = 1)
+#'
+#' # Can supply arguments individually or in a list
+#' lang(quote(f), a = 1, b = 2)
+#' lang(quote(f), splice(list(a = 1, b = 2)))
+#'
+#' # Creating namespaced calls:
+#' lang("fun", arg = quote(baz), .ns = "mypkg")
+lang <- function(.fn, ..., .ns = NULL) {
+  if (is_character(.fn)) {
+    if (length(.fn) != 1) {
+      abort("`.fn` must be a length 1 string")
+    }
+    .fn <- sym(.fn)
+  } else if (!is_callable(.fn)) {
+    abort("Can't create call to non-callable object")
+  }
+
+  if (!is_null(.ns)) {
+    .fn <- new_language(sym_namespace, pairlist(sym(.ns), .fn))
+  }
+
+  new_language(.fn, as.pairlist(dots_list(...)))
+}
+#' @rdname lang
+#' @param head A [callable][is_callable] object: a symbol, call, or
+#'   literal function.
+#' @param tail A [pairlist] of arguments.
+#' @export
+new_language <- function(head, tail = NULL) {
+  if (!is_callable(head)) {
+    abort("Can't create call to non-callable object")
+  }
+  if (!is_pairlist(tail)) {
+    abort("`tail` must be a pairlist")
+  }
+  .Call(rlang_new_language, head, tail)
+}
+
+#' Is an object callable?
+#'
+#' A callable object is an object that can be set as the head of a
+#' [call node][lang_head]. This includes [symbolic
+#' objects][is_symbolic] that evaluate to a function or literal
+#' functions.
+#'
+#' Note that strings may look like callable objects because
+#' expressions of the form `"list"()` are valid R code. However,
+#' that's only because the R parser transforms strings to symbols. It
+#' is not legal to manually set language heads to strings.
+#'
+#' @param x An object to test.
+#' @export
+#' @examples
+#' # Symbolic objects and functions are callable:
+#' is_callable(quote(foo))
+#' is_callable(base::identity)
+#'
+#' # mut_node_car() lets you modify calls without any checking:
+#' lang <- quote(foo(10))
+#' mut_node_car(lang, get_env())
+#'
+#' # Use is_callable() to check an input object is safe to put as CAR:
+#' obj <- base::identity
+#'
+#' if (is_callable(obj)) {
+#'   lang <- mut_node_car(lang, obj)
+#' } else {
+#'   abort("`obj` must be callable")
+#' }
+#'
+#' eval_bare(lang)
+is_callable <- function(x) {
+  is_symbolic(x) || is_function(x)
+}
+
+#' Is object a call?
+#'
+#' This function tests if `x` is a call (or [language
+#' object][lang]). This is a pattern-matching predicate that will
+#' return `FALSE` if `name` and `n` are supplied and the call does not
+#' match these properties. `is_unary_lang()` and `is_binary_lang()`
+#' hardcode `n` to 1 and 2.
+#'
+#' @param x An object to test. If a formula, the right-hand side is
+#'   extracted.
+#' @param name An optional name that the call should match. It is
+#'   passed to [sym()] before matching. This argument is vectorised
+#'   and you can supply a vector of names to match. In this case,
+#'   `is_lang()` returns `TRUE` if at least one name matches.
+#' @param n An optional number of arguments that the call should
+#'   match.
+#' @param ns The namespace of the call. If `NULL`, the namespace
+#'   doesn't participate in the pattern-matching. If an empty string
+#'   `""` and `x` is a namespaced call, `is_lang()` returns
+#'   `FALSE`. If any other string, `is_lang()` checks that `x` is
+#'   namespaced within `ns`.
+#' @seealso [is_expr()]
+#' @export
+#' @examples
+#' is_lang(quote(foo(bar)))
+#'
+#' # You can pattern-match the call with additional arguments:
+#' is_lang(quote(foo(bar)), "foo")
+#' is_lang(quote(foo(bar)), "bar")
+#' is_lang(quote(foo(bar)), quote(foo))
+#'
+#' # Match the number of arguments with is_lang():
+#' is_lang(quote(foo(bar)), "foo", 1)
+#' is_lang(quote(foo(bar)), "foo", 2)
+#'
+#' # Or more specifically:
+#' is_unary_lang(quote(foo(bar)))
+#' is_unary_lang(quote(+3))
+#' is_unary_lang(quote(1 + 3))
+#' is_binary_lang(quote(1 + 3))
+#'
+#'
+#' # By default, namespaced calls are tested unqualified:
+#' ns_expr <- quote(base::list())
+#' is_lang(ns_expr, "list")
+#'
+#' # You can also specify whether the call shouldn't be namespaced by
+#' # supplying an empty string:
+#' is_lang(ns_expr, "list", ns = "")
+#'
+#' # Or if it should have a namespace:
+#' is_lang(ns_expr, "list", ns = "utils")
+#' is_lang(ns_expr, "list", ns = "base")
+#'
+#'
+#' # The name argument is vectorised so you can supply a list of names
+#' # to match with:
+#' is_lang(quote(foo(bar)), c("bar", "baz"))
+#' is_lang(quote(foo(bar)), c("bar", "foo"))
+#' is_lang(quote(base::list), c("::", ":::", "$", "@"))
+is_lang <- function(x, name = NULL, n = NULL, ns = NULL) {
+  if (typeof(x) != "language") {
+    return(FALSE)
+  }
+
+  if (!is_null(ns)) {
+    if (identical(ns, "") && is_namespaced_lang(x, private = FALSE)) {
+      return(FALSE)
+    } else if (!is_namespaced_lang(x, ns, private = FALSE)) {
+      return(FALSE)
+    }
+  }
+
+  x <- lang_unnamespace(x)
+
+  if (!is_null(name)) {
+    # Wrap language objects in a list
+    if (!is_vector(name)) {
+      name <- list(name)
+    }
+
+    unmatched <- TRUE
+    for (elt in name) {
+      if (identical(x[[1]], sym(elt))) {
+        unmatched <- FALSE
+        break
+      }
+    }
+
+    if (unmatched) {
+      return(FALSE)
+    }
+  }
+
+  if (!is_null(n) && !has_length(x, n + 1L)) {
+    return(FALSE)
+  }
+
+  TRUE
+}
+#' @rdname is_lang
+#' @export
+is_unary_lang <- function(x, name = NULL, ns = NULL) {
+  is_lang(x, name, n = 1L, ns = ns)
+}
+#' @rdname is_lang
+#' @export
+is_binary_lang <- function(x, name = NULL, ns = NULL) {
+  is_lang(x, name, n = 2L, ns = ns)
+}
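As an illustrative aside (not part of rlang), the pattern-matching performed by `is_lang()` above can be approximated in base R. The helper name `base_is_call()` is hypothetical; note that the length check uses `n + 1` because the call head counts as the first element:

```r
# Base R sketch of the is_lang() pattern matching (illustrative only):
# a call matches if its head is the given symbol and it carries `n`
# arguments (length is n + 1 because the head is element 1).
base_is_call <- function(x, name = NULL, n = NULL) {
  if (!is.call(x)) return(FALSE)
  if (!is.null(name) && !identical(x[[1]], as.name(name))) return(FALSE)
  if (!is.null(n) && length(x) != n + 1L) return(FALSE)
  TRUE
}

base_is_call(quote(foo(bar)), "foo", 1)  # TRUE
base_is_call(quote(foo(bar)), "foo", 2)  # FALSE: only one argument
```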
+
+#' Modify the arguments of a call
+#'
+#' @param .lang Can be a call (language object), a formula quoting a
+#'   call in the right-hand side, or a frame object from which to
+#'   extract the call expression.
+#' @param ... Named or unnamed expressions (constants, names or calls)
+#'   used to modify the call. Use `NULL` to remove arguments. Dots are
+#'   evaluated with [explicit splicing][dots_list].
+#' @param .standardise If `TRUE`, the call is standardised beforehand
+#'   to match existing unnamed arguments to their argument names. This
+#'   prevents new named arguments from accidentally replacing original
+#'   unnamed arguments.
+#' @return A quosure if `.lang` is a quosure, a call otherwise.
+#' @seealso lang
+#' @export
+#' @examples
+#' call <- quote(mean(x, na.rm = TRUE))
+#'
+#' # Modify an existing argument
+#' lang_modify(call, na.rm = FALSE)
+#' lang_modify(call, x = quote(y))
+#'
+#' # Remove an argument
+#' lang_modify(call, na.rm = NULL)
+#'
+#' # Add a new argument
+#' lang_modify(call, trim = 0.1)
+#'
+#' # Add an explicit missing argument
+#' lang_modify(call, na.rm = quote(expr = ))
+#'
+#' # Supply a list of new arguments with splice()
+#' newargs <- list(na.rm = NULL, trim = 0.1)
+#' lang_modify(call, splice(newargs))
+#'
+#' # Supply a call frame to extract the frame expression:
+#' f <- function(bool = TRUE) {
+#'   lang_modify(call_frame(), splice(list(bool = FALSE)))
+#' }
+#' f()
+#'
+#'
+#' # You can also modify quosures (a new quosure is returned):
+#' f <- ~matrix(bar)
+#' lang_modify(f, quote(foo))
+lang_modify <- function(.lang, ..., .standardise = FALSE) {
+  args <- dots_list(...)
+  if (any(duplicated(names(args)) & names(args) != "")) {
+    abort("Duplicate arguments")
+  }
+
+  if (.standardise) {
+    quo <- lang_as_quosure(.lang, caller_env())
+    expr <- get_expr(lang_standardise(quo))
+  } else {
+    expr <- get_expr(.lang)
+  }
+
+  # Named arguments can be spliced by R
+  named <- have_name(args)
+  for (nm in names(args)[named]) {
+    expr[[nm]] <- args[[nm]]
+  }
+
+  if (any(!named)) {
+    # Duplicate the node list if it wasn't copied above
+    if (!any(named)) {
+      expr <- duplicate(expr, shallow = TRUE)
+    }
+
+    remaining_args <- as.pairlist(args[!named])
+    expr <- node_append(expr, remaining_args)
+  }
+
+  set_expr(.lang, expr)
+}
+lang_as_quosure <- function(lang, env) {
+  if (is_frame(lang)) {
+    new_quosure(lang$expr, lang$env)
+  } else {
+    as_quosure(lang, env)
+  }
+}
+
+#' Standardise a call
+#'
+#' This is essentially equivalent to [base::match.call()], but with
+#' better handling of primitive functions.
+#'
+#' @param lang Can be a call (language object), a formula quoting a
+#'   call in the right-hand side, or a frame object from which to
+#'   extract the call expression.
+#' @return A quosure if `.lang` is a quosure, a raw call otherwise.
+#' @export
+lang_standardise <- function(lang) {
+  expr <- get_expr(lang)
+  if (is_frame(lang)) {
+    fn <- lang$fn
+  } else {
+    # The call name might be a literal, not necessarily a symbol
+    env <- get_env(lang, caller_env())
+    fn <- eval_bare(lang_head(expr), env)
+  }
+
+  matched <- match.call(as_closure(fn), expr)
+  set_expr(lang, matched)
+}
+
+#' Extract function from a call
+#'
+#' If a frame or formula, the function will be retrieved from the
+#' associated environment. Otherwise, it is looked up in the calling
+#' frame.
+#'
+#' @inheritParams lang_standardise
+#' @export
+#' @seealso [lang_name()]
+#' @examples
+#' # Extract from a quoted call:
+#' lang_fn(~matrix())
+#' lang_fn(quote(matrix()))
+#'
+#' # Extract the calling function
+#' test <- function() lang_fn(call_frame())
+#' test()
+lang_fn <- function(lang) {
+  if (is_frame(lang)) {
+    return(lang$fn)
+  }
+
+  expr <- get_expr(lang)
+  env <- get_env(lang, caller_env())
+
+  if (!is_lang(expr)) {
+    abort("`lang` must quote a call")
+  }
+
+  switch_lang(expr,
+    recursive = abort("`lang` does not call a named or inlined function"),
+    inlined = node_car(expr),
+    named = ,
+    namespaced = ,
+    eval_bare(node_car(expr), env)
+  )
+}
+
+#' Extract function name of a call
+#'
+#' @inheritParams lang_standardise
+#' @return A string with the function name, or `NULL` if the function
+#'   is anonymous.
+#' @seealso [lang_fn()]
+#' @export
+#' @examples
+#' # Extract the function name from quoted calls:
+#' lang_name(~foo(bar))
+#' lang_name(quote(foo(bar)))
+#'
+#' # Or from a frame:
+#' foo <- function(bar) lang_name(call_frame())
+#' foo(bar)
+#'
+#' # Namespaced calls are correctly handled:
+#' lang_name(~base::matrix(baz))
+#'
+#' # Anonymous and subsetted functions return NULL:
+#' lang_name(~foo$bar())
+#' lang_name(~foo[[bar]]())
+#' lang_name(~foo()())
+lang_name <- function(lang) {
+  lang <- get_expr(lang)
+  if (!is_lang(lang)) {
+    abort("`lang` must be a call or must wrap a call (e.g. in a quosure)")
+  }
+
+  switch_lang(lang,
+    named = as_string(node_car(lang)),
+    namespaced = as_string(node_cadr(node_cdar(lang))),
+    NULL
+  )
+}
+
+#' Return the head or tail of a call object
+#'
+#' @description
+#'
+#' These functions return the head or the tail of a call. See section
+#' on calls as parse trees in [lang()]. They are equivalent to
+#' [node_car()] and [node_cdr()] but support quosures and check that
+#' the input is indeed a call before retrieving the head or tail (it
+#' is unsafe to do this without type checking).
+#'
+#' `lang_head()` returns the head of the call without any conversion,
+#' unlike [lang_name()] which checks that the head is a symbol and
+#' converts it to a string. `lang_tail()` returns the pairlist of
+#' arguments (while [lang_args()] returns the same object converted to
+#' a regular list).
+#'
+#' @inheritParams lang_standardise
+#' @seealso [pairlist], [lang_args()], [lang()]
+#' @export
+#' @examples
+#' lang <- quote(foo(bar, baz))
+#' lang_head(lang)
+#' lang_tail(lang)
+lang_head <- function(lang) {
+  lang <- get_expr(lang)
+  stopifnot(is_lang(lang))
+  node_car(lang)
+}
+#' @rdname lang_head
+#' @export
+lang_tail <- function(lang) {
+  lang <- get_expr(lang)
+  stopifnot(is_lang(lang))
+  node_cdr(lang)
+}
+
+#' Extract arguments from a call
+#'
+#' @inheritParams lang_standardise
+#' @return A named list of arguments.
+#' @seealso [lang_tail()], [fn_fmls()] and [fn_fmls_names()]
+#' @export
+#' @examples
+#' call <- quote(f(a, b))
+#'
+#' # Subsetting a call returns the arguments converted to a language
+#' # object:
+#' call[-1]
+#'
+#' # See also lang_tail() which returns the arguments without
+#' # conversion as the original pairlist:
+#' str(lang_tail(call))
+#'
+#' # On the other hand, lang_args() returns a regular list that is
+#' # often easier to work with:
+#' str(lang_args(call))
+#'
+#' # When the arguments are unnamed, a vector of empty strings is
+#' # supplied (rather than NULL):
+#' lang_args_names(call)
+lang_args <- function(lang) {
+  lang <- get_expr(lang)
+  args <- as.list(lang_tail(lang))
+  set_names(args, names2(args))
+}
+
+#' @rdname lang_args
+#' @export
+lang_args_names <- function(lang) {
+  lang <- get_expr(lang)
+  names2(lang_tail(lang))
+}
+
+is_qualified_lang <- function(x) {
+  if (typeof(x) != "language") return(FALSE)
+  is_qualified_symbol(node_car(x))
+}
+is_namespaced_lang <- function(x, ns = NULL, private = NULL) {
+  if (typeof(x) != "language") return(FALSE)
+  if (!is_namespaced_symbol(node_car(x), ns, private)) return(FALSE)
+  TRUE
+}
+
+# Returns a new call whose CAR has been unqualified
+lang_unnamespace <- function(x) {
+  if (is_namespaced_lang(x)) {
+    lang <- lang(node_cadr(node_cdar(x)))
+    mut_node_cdr(lang, node_cdr(x))
+  } else {
+    x
+  }
+}
+
+# Qualified and namespaced symbols are actually calls
+is_qualified_symbol <- function(x) {
+  if (typeof(x) != "language") return(FALSE)
+
+  head <- node_cadr(node_cdr(x))
+  if (typeof(head) != "symbol") return(FALSE)
+
+  qualifier <- node_car(x)
+  identical(qualifier, sym_namespace) ||
+    identical(qualifier, sym_namespace2) ||
+    identical(qualifier, sym_dollar) ||
+    identical(qualifier, sym_at)
+}
+is_namespaced_symbol <- function(x, ns = NULL, private = NULL) {
+  if (typeof(x) != "language") return(FALSE)
+  if (!is_null(ns) && !identical(node_cadr(x), sym(ns))) return(FALSE)
+
+  head <- node_car(x)
+  if (is_null(private)) {
+    identical(head, sym_namespace) || identical(head, sym_namespace2)
+  } else if (private) {
+    identical(head, sym_namespace2)
+  } else {
+    identical(head, sym_namespace)
+  }
+}
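The comment above that "qualified and namespaced symbols are actually calls" can be verified directly with base R alone, which may help readers unfamiliar with this quirk of the parser:

```r
# A namespaced symbol like `base::list` is itself a call to `::`,
# which is why the helpers above inspect language nodes rather than
# symbols (plain base R, no rlang needed):
e <- quote(base::list)
is.call(e)                        # TRUE
identical(e[[1]], as.name("::"))  # TRUE: `::` is the call head
as.character(e[[2]])              # "base", the namespace
as.character(e[[3]])              # "list", the qualified symbol
```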
diff --git a/R/expr-node.R b/R/expr-node.R
new file mode 100644
index 0000000..0c1704d
--- /dev/null
+++ b/R/expr-node.R
@@ -0,0 +1,282 @@
+#' Helpers for pairlist and language nodes
+#'
+#' @description
+#'
+#' Like any [parse tree](https://en.wikipedia.org/wiki/Parse_tree), R
+#' expressions are structured as trees of nodes. Each node has two
+#' components: the head and the tail (though technically there is
+#' actually a third component for argument names, see details). Due to
+#' R's [lisp roots](https://en.wikipedia.org/wiki/CAR_and_CDR), the
+#' head of a node (or cons cell) is called the CAR and the tail is
+#' called the CDR (pronounced _car_ and _cou-der_). While R's ordinary
+#' subsetting operators have builtin support for indexing into these
+#' trees and replacing elements, it is sometimes useful to manipulate
+#' the nodes more directly. This is the purpose of functions like
+#' `node_car()` and `mut_node_car()`. They are particularly useful to
+#' prototype algorithms for your C-level functions.
+#'
+#' * `node_car()` and `mut_node_car()` access or change the head of a node.
+#'
+#' * `node_cdr()` and `mut_node_cdr()` access or change the tail of a node.
+#'
+#' * Variants like `node_caar()` or `mut_node_cdar()` deal with the
+#'   CAR of the CAR of a node or the CDR of the CAR of a node
+#'   respectively. The letters in the middle indicate the type (CAR or
+#'   CDR) and order of access.
+#'
+#' * `node_tag()` and `mut_node_tag()` access or change the tag of a
+#'   node. This is meant for argument names and should only contain
+#'   symbols (not strings).
+#'
+#' * `node()` creates a new node from two components.
+#'
+#' @details
+#'
+#' R has two types of nodes to represent parse trees: language nodes,
+#' which represent function calls, and pairlist nodes, which represent
+#' arguments in a function call. These are the exact same data
+#' structures with a different name. This distinction is helpful for
+#' parsing the tree: the top-level node of a function call always has
+#' _language_ type while its arguments have _pairlist_ type.
+#'
+#' Note that it is risky to manipulate calls at the node level. First,
+#' the calls are changed in place. This is unlike base R operators
+#' which create a new copy of the language tree for each modification.
+#' To make sure modifying a language object does not produce
+#' side-effects, rlang exports the `duplicate()` function to create a
+#' deep copy (or optionally a shallow copy, i.e. only the top-level
+#' node is copied). The second danger is that R expects language trees
+#' to be structured as a `NULL`-terminated list. The CAR of a node is
+#' a data slot and can contain anything, including another node (which
+#' is how you form trees, as opposed to mere linked lists). On the
+#' other hand, the CDR has to be either another node, or `NULL`. If it
+#' is terminated by anything other than the `NULL` object, many R
+#' commands will crash, including functions like `str()`. It is up to
+#' you to ensure that the language list you have modified is
+#' `NULL`-terminated.
+#'
+#' Finally, all nodes can contain metadata in the TAG slot. This is
+#' meant for argument names and R expects tags to contain a symbol
+#' (not a string).
+#'
+#' @param x A language or pairlist node. Note that these functions are
+#'   barebones and do not perform any type checking.
+#' @param newcar,newcdr The new CAR or CDR for the node. These can be
+#'   any R objects.
+#' @param newtag The new tag for the node. This should be a symbol.
+#' @return Setters like `mut_node_car()` invisibly return `x` modified
+#'   in place. Getters return the requested node component.
+#' @seealso [duplicate()] for creating copy-safe objects,
+#'   [lang_head()] and [lang_tail()] as slightly higher level
+#'   alternatives that check their input, and [base::pairlist()] for
+#'   an easier way of creating a linked list of nodes.
+#' @examples
+#' # Changing a node component happens in place and can have side
+#' # effects. Let's create a language object and a copy of it:
+#' lang <- quote(foo(bar))
+#' copy <- lang
+#'
+#' # Using R's builtin operators to change the language tree does not
+#' # create side effects:
+#' copy[[2]] <- quote(baz)
+#' copy
+#' lang
+#'
+#' # On the other hand, the CAR and CDR operators operate in place. Let's
+#' # create new objects since the previous examples triggered a copy:
+#' lang <- quote(foo(bar))
+#' copy <- lang
+#'
+#' # Now we change the argument pairlist of `copy`, making sure the new
+#' # arguments are NULL-terminated:
+#' mut_node_cdr(copy, node(quote(BAZ), NULL))
+#'
+#' # Or equivalently:
+#' mut_node_cdr(copy, pairlist(quote(BAZ)))
+#' copy
+#'
+#' # The original object has been changed in place:
+#' lang
+#' @name pairlist
+NULL
+
+#' @rdname pairlist
+#' @export
+node <- function(newcar, newcdr) {
+  .Call(rlang_cons, newcar, newcdr)
+}
+
+#' @rdname pairlist
+#' @export
+node_car <- function(x) {
+  .Call(rlang_car, x)
+}
+#' @rdname pairlist
+#' @export
+node_cdr <- function(x) {
+  .Call(rlang_cdr, x)
+}
+#' @rdname pairlist
+#' @export
+node_caar <- function(x) {
+  .Call(rlang_caar, x)
+}
+#' @rdname pairlist
+#' @export
+node_cadr <- function(x) {
+  .Call(rlang_cadr, x)
+}
+#' @rdname pairlist
+#' @export
+node_cdar <- function(x) {
+  .Call(rlang_cdar, x)
+}
+#' @rdname pairlist
+#' @export
+node_cddr <- function(x) {
+  .Call(rlang_cddr, x)
+}
+
+#' @rdname pairlist
+#' @export
+mut_node_car <- function(x, newcar) {
+  invisible(.Call(rlang_set_car, x, newcar))
+}
+#' @rdname pairlist
+#' @export
+mut_node_cdr <- function(x, newcdr) {
+  invisible(.Call(rlang_set_cdr, x, newcdr))
+}
+#' @rdname pairlist
+#' @export
+mut_node_caar <- function(x, newcar) {
+  invisible(.Call(rlang_set_caar, x, newcar))
+}
+#' @rdname pairlist
+#' @export
+mut_node_cadr <- function(x, newcar) {
+  invisible(.Call(rlang_set_cadr, x, newcar))
+}
+#' @rdname pairlist
+#' @export
+mut_node_cdar <- function(x, newcdr) {
+  invisible(.Call(rlang_set_cdar, x, newcdr))
+}
+#' @rdname pairlist
+#' @export
+mut_node_cddr <- function(x, newcdr) {
+  invisible(.Call(rlang_set_cddr, x, newcdr))
+}
+
+#' @rdname pairlist
+#' @export
+node_tag <- function(x) {
+  .Call(rlang_tag, x)
+}
+#' @rdname pairlist
+#' @export
+mut_node_tag <- function(x, newtag) {
+  invisible(.Call(rlang_set_tag, x, newtag))
+}
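For readers without the C-level accessors at hand, the CAR/CDR structure these functions expose can be glimpsed through ordinary base R subsetting (a rough sketch; base R flattens the CDR chain to a regular list rather than exposing the nodes themselves):

```r
# Rough base R view of the node structure handled above: for a call,
# [[1]] is the CAR (function position) and the remaining elements
# correspond to the CDR chain of arguments, with names as tags.
e <- quote(f(x, y = 2))
e[[1]]                  # CAR: the symbol `f`
args <- as.list(e)[-1]  # CDR contents, flattened to a regular list
names(args)             # tags: "" for unnamed, "y" for tagged
```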
+
+#' Coerce to pairlist
+#'
+#' This transforms vector objects to a linked pairlist of nodes. See
+#' [pairlist] for information about the pairlist type.
+#'
+#' @param x An object to coerce.
+#' @seealso [pairlist]
+#' @export
+as_pairlist <- function(x) {
+  if (!is_vector(x)) {
+    abort_coercion(x, "pairlist")
+  }
+  as.vector(x, "pairlist")
+}
+
+#' Is object a node or pairlist?
+#'
+#' @description
+#'
+#' * `is_pairlist()` checks that `x` has type `pairlist` or `NULL`.
+#'   `NULL` is treated as a pairlist because it is the terminating
+#'   node of pairlists and an empty pairlist is thus the `NULL`
+#'   object itself.
+#'
+#' * `is_node()` checks that `x` has type `pairlist`.
+#'
+#' In other words, `is_pairlist()` tests for the data structure while
+#' `is_node()` tests for the internal type.
+#' @param x Object to test.
+#' @seealso [is_lang()] tests for language nodes.
+#' @export
+is_pairlist <- function(x) {
+  typeof(x) %in% c("pairlist", "NULL")
+}
+#' @rdname is_pairlist
+#' @export
+is_node <- function(x) {
+  typeof(x) == "pairlist"
+}
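The distinction drawn above can be checked with base R types alone: an empty pairlist is the `NULL` object, so `NULL` is a valid (empty) pairlist without being a node itself:

```r
# Base R illustration of the pairlist/NULL distinction:
typeof(pairlist(1))  # "pairlist": a one-node list
pairlist()           # NULL: the empty pairlist is the NULL object
typeof(NULL)         # "NULL"
```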
+
+
+#' Duplicate an R object
+#'
+#' In R semantics, objects are copied by value. This means that
+#' modifying the copy leaves the original object intact. Since
+#' copying data in memory is an expensive operation, copies in R are
+#' as lazy as possible. They only happen when the new object is
+#' actually modified. However, some operations (like [mut_node_car()]
+#' or [mut_node_cdr()]) do not support copy-on-write. In those cases,
+#' it is necessary to duplicate the object manually in order to
+#' preserve copy-by-value semantics.
+#'
+#' Some objects are not duplicable, like symbols and environments.
+#' `duplicate()` returns its input for these unique objects.
+#'
+#' @param x Any R object. However, uncopyable types like symbols and
+#'   environments are returned as is (just like with `<-`).
+#' @param shallow This is relevant for recursive data structures like
+#'   lists, calls and pairlists. A shallow copy only duplicates the
+#'   top-level data structure. The objects contained in the list are
+#'   still the same.
+#' @seealso pairlist
+#' @export
+duplicate <- function(x, shallow = FALSE) {
+  if (shallow) {
+    .Call(rlang_shallow_duplicate, x)
+  } else {
+    .Call(rlang_duplicate, x)
+  }
+}
+
+
+node_walk <- function(.x, .f, ...) {
+  cur <- .x
+  while (!is.null(cur)) {
+    .f(cur, ...)
+    cur <- node_cdr(cur)
+  }
+  NULL
+}
+node_walk_nonnull <- function(.x, .f, ...) {
+  cur <- .x
+  out <- NULL
+  while (!is.null(cur) && is.null(out)) {
+    out <- .f(cur, ...)
+    cur <- node_cdr(cur)
+  }
+  out
+}
+node_walk_last <- function(.x, .f, ...) {
+  cur <- .x
+  while (!is.null(node_cdr(cur))) {
+    cur <- node_cdr(cur)
+  }
+  .f(cur, ...)
+}
+
+node_append <- function(.x, .y) {
+  node_walk_last(.x, function(l) mut_node_cdr(l, .y))
+  .x
+}
diff --git a/R/expr-sym.R b/R/expr-sym.R
new file mode 100644
index 0000000..e4ada04
--- /dev/null
+++ b/R/expr-sym.R
@@ -0,0 +1,39 @@
+#' Create a symbol or list of symbols
+#'
+#' These functions take strings as input and turn them into symbols.
+#' Unlike `as.name()`, they convert the strings to the native
+#' encoding beforehand. This is necessary because symbols silently
+#' drop the encoding mark of strings (see [set_str_encoding()]).
+#'
+#' @param x A string or list of strings.
+#' @return A symbol for `sym()` and a list of symbols for `syms()`.
+#' @export
+sym <- function(x) {
+  if (is_symbol(x)) {
+    return(x)
+  }
+  if (!is_string(x)) {
+    abort("Only strings can be converted to symbols")
+  }
+  .Call(rlang_symbol, x)
+}
+#' @rdname sym
+#' @export
+syms <- function(x) {
+  map(x, sym)
+}
+
+#' Is object a symbol?
+#' @param x An object to test.
+#' @export
+is_symbol <- function(x) {
+  typeof(x) == "symbol"
+}
+
+sym_namespace <- quote(`::`)
+sym_namespace2 <- quote(`:::`)
+sym_dollar <- quote(`$`)
+sym_at <- quote(`@`)
+sym_tilde <- quote(`~`)
+sym_def <- quote(`:=`)
+sym_curly <- quote(`{`)
diff --git a/R/expr.R b/R/expr.R
new file mode 100644
index 0000000..3d09e71
--- /dev/null
+++ b/R/expr.R
@@ -0,0 +1,346 @@
+#' Raw quotation of an expression
+#'
+#' @description
+#'
+#' These functions return raw expressions (whereas [quo()] and
+#' variants return quosures). They support [quasiquotation]
+#' syntax.
+#'
+#' - `expr()` returns its argument unevaluated. It is equivalent to
+#'   [base::bquote()].
+#'
+#' - `enexpr()` takes an argument name and returns it unevaluated. It
+#'   is equivalent to [base::substitute()].
+#'
+#' - `exprs()` captures multiple expressions and returns a list. In
+#'   particular, it can capture expressions in `...`. It supports name
+#'   unquoting with `:=` (see [quos()]). It is equivalent to
+#'   `eval(substitute(alist(...)))`.
+#'
+#' See [is_expr()] for more about R expressions.
+#'
+#' @inheritParams quosure
+#' @seealso [quo()], [is_expr()]
+#' @return The raw expression supplied as argument. `exprs()` returns
+#'   a list of expressions.
+#' @export
+#' @examples
+#' # The advantage of expr() over quote() is that it unquotes on
+#' # capture:
+#' expr(list(1, !! 3 + 10))
+#'
+#' # Unquoting can be especially useful for successive transformation
+#' # of a captured expression:
+#' (expr <- quote(foo(bar)))
+#' (expr <- expr(inner(!! expr, arg1)))
+#' (expr <- expr(outer(!! expr, !!! lapply(letters[1:3], as.symbol))))
+#'
+#' # Unlike quo(), expr() produces expressions that can
+#' # be evaluated with base::eval():
+#' e <- quote(letters)
+#' e <- expr(toupper(!!e))
+#' eval(e)
+#'
+#' # Be careful if you unquote a quosure: you need to take the RHS
+#' # (and lose the scope information) to evaluate with eval():
+#' f <- quo(letters)
+#' e <- expr(toupper(!! get_expr(f)))
+#' eval(e)
+#'
+#' # On the other hand it's fine to unquote quosures if you evaluate
+#' # with eval_tidy():
+#' f <- quo(letters)
+#' e <- expr(toupper(!! f))
+#' eval_tidy(e)
+#'
+#' # exprs() lets you unquote names with the definition operator:
+#' nm <- "foo"
+#' exprs(a = 1, !! nm := 2)
+expr <- function(expr) {
+  enexpr(expr)
+}
+#' @rdname expr
+#' @export
+enexpr <- function(arg) {
+  if (missing(arg)) {
+    return(missing_arg())
+  }
+
+  capture <- lang(captureArg, substitute(arg))
+  arg <- eval_bare(capture, caller_env())
+  .Call(rlang_interp, arg$expr, arg$env, TRUE)
+}
+#' @rdname expr
+#' @inheritParams dots_values
+#' @param ... Arguments to extract.
+#' @export
+exprs <- function(..., .ignore_empty = "trailing") {
+  map(quos(..., .ignore_empty = .ignore_empty), f_rhs)
+}
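The base R equivalence mentioned in the documentation above can be demonstrated directly. This sketch (the helper name `capture_dots()` is made up for illustration) captures `...` unevaluated but offers no quasiquotation support, unlike `exprs()`:

```r
# Base R equivalent of exprs(), without unquoting support (sketch):
capture_dots <- function(...) eval(substitute(alist(...)))

dots <- capture_dots(x + 1, y = letters)
identical(dots[[1]], quote(x + 1))  # TRUE: captured, not evaluated
identical(dots$y, quote(letters))   # TRUE: still a symbol
```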
+
+
+#' Is an object an expression?
+#'
+#' @description
+#'
+#' `is_expr()` tests for expressions, the set of objects that can be
+#' obtained from parsing R code. An expression can be one of two
+#' things: either a symbolic object (for which `is_symbolic()` returns
+#' `TRUE`), or a syntactic literal (testable with
+#' `is_syntactic_literal()`). Technically, calls can contain any R
+#' object, not necessarily symbolic objects or syntactic
+#' literals. However, this only happens in artificial
+#' situations. Expressions as we define them only contain numbers,
+#' strings, `NULL`, symbols, and calls: this is the complete set of R
+#' objects that can be created when R parses source code (e.g. from
+#' using [parse_expr()]).
+#'
+#' Note that we are using the term expression in its colloquial sense
+#' and not to refer to [expression()] vectors, a data type that wraps
+#' expressions in a vector and which isn't used much in modern R code.
+#'
+#' @details
+#'
+#' `is_symbolic()` returns `TRUE` for symbols and calls (objects with
+#' type `language`). Symbolic objects are replaced by their value
+#' during evaluation. Literals are the complement of symbolic
+#' objects. They are their own value and return themselves during
+#' evaluation.
+#'
+#' `is_syntactic_literal()` is a predicate that returns `TRUE` for the
+#' subset of literals that are created by R when parsing text (see
+#' [parse_expr()]): numbers, strings and `NULL`. Along with symbols,
+#' these literals are the terminating nodes in a parse tree.
+#'
+#' Note that in the most general sense, a literal is any R object that
+#' evaluates to itself and that can be evaluated in the empty
+#' environment. For instance, `quote(c(1, 2))` is not a literal, it is
+#' a call. However, the result of evaluating it in [base_env()] is a
+#' literal (in this case an atomic vector).
+#'
+#' Pairlists are also a kind of language objects. However, since they
+#' are mostly an internal data structure, `is_expr()` returns `FALSE`
+#' for pairlists. You can use `is_pairlist()` to explicitly check for
+#' them. Pairlists are the data structure for function arguments. They
+#' usually do not arise from R code because subsetting a call is a
+#' type-preserving operation. However, you can obtain the pairlist of
+#' arguments by taking the CDR of the call object from C code. The
+#' rlang function [lang_tail()] will do it from R. Another way in
+#' which pairlist of arguments arise is by extracting the argument
+#' list of a closure with [base::formals()] or [fn_fmls()].
+#'
+#' @param x An object to test.
+#' @seealso [is_lang()] for a call predicate.
+#' @export
+#' @examples
+#' q1 <- quote(1)
+#' is_expr(q1)
+#' is_syntactic_literal(q1)
+#'
+#' q2 <- quote(x)
+#' is_expr(q2)
+#' is_symbol(q2)
+#'
+#' q3 <- quote(x + 1)
+#' is_expr(q3)
+#' is_lang(q3)
+#'
+#'
+#' # Atomic expressions are the terminating nodes of a call tree:
+#' # NULL or a scalar atomic vector:
+#' is_syntactic_literal("string")
+#' is_syntactic_literal(NULL)
+#'
+#' is_syntactic_literal(letters)
+#' is_syntactic_literal(quote(call()))
+#'
+#' # Parsable literals have the property of being self-quoting:
+#' identical("foo", quote("foo"))
+#' identical(1L, quote(1L))
+#' identical(NULL, quote(NULL))
+#'
+#' # Like any literals, they can be evaluated within the empty
+#' # environment:
+#' eval_bare(quote(1L), empty_env())
+#'
+#' # Whereas it would fail for symbolic expressions:
+#' # eval_bare(quote(c(1L, 2L)), empty_env())
+#'
+#'
+#' # Pairlists are also language objects representing argument lists.
+#' # You will usually encounter them with extracted formals:
+#' fmls <- formals(is_expr)
+#' typeof(fmls)
+#'
+#' # Since they are mostly an internal data structure, is_expr()
+#' # returns FALSE for pairlists, so you will have to check explicitly
+#' # for them:
+#' is_expr(fmls)
+#' is_pairlist(fmls)
+#'
+#' # Note that you can also extract call arguments as a pairlist:
+#' lang_tail(quote(fn(arg1, arg2 = "foo")))
+is_expr <- function(x) {
+  is_symbolic(x) || is_syntactic_literal(x)
+}
+#' @export
+#' @rdname is_expr
+is_syntactic_literal <- function(x) {
+  typeof(x) == "NULL" || (length(x) == 1 && typeof(x) %in% parsable_atomic_types)
+}
+#' @export
+#' @rdname is_expr
+is_symbolic <- function(x) {
+  typeof(x) %in% c("language", "symbol")
+}
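The taxonomy described above maps cleanly onto base R types, which can be verified without rlang:

```r
# The expression taxonomy in base R terms: symbolic objects have type
# "symbol" or "language"; syntactic literals are NULL or length-one
# parsable atomics, and are self-quoting.
typeof(quote(x))            # "symbol"    -> symbolic
typeof(quote(x + 1))        # "language"  -> symbolic
typeof(quote("a"))          # "character" -> syntactic literal
identical("a", quote("a"))  # TRUE: literals quote to themselves
```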
+
+
+#' Turn an expression to a label
+#'
+#' `expr_text()` turns the expression into a single string, which
+#' might be multi-line. `expr_name()` is suitable for formatting
+#' names. It works best with symbols and scalar types, but also
+#' accepts calls. `expr_label()` formats the expression nicely for use
+#' in messages.
+#'
+#' @param expr An expression to labellise.
+#' @export
+#' @examples
+#' # To labellise a function argument, first capture it with
+#' # substitute():
+#' fn <- function(x) expr_label(substitute(x))
+#' fn(x:y)
+#'
+#' # Strings are encoded
+#' expr_label("a\nb")
+#'
+#' # Names and expressions are quoted with ``
+#' expr_label(quote(x))
+#' expr_label(quote(a + b + c))
+#'
+#' # Long expressions are collapsed
+#' expr_label(quote(foo({
+#'   1 + 2
+#'   print(x)
+#' })))
+expr_label <- function(expr) {
+  if (is.character(expr)) {
+    encodeString(expr, quote = '"')
+  } else if (is.atomic(expr)) {
+    format(expr)
+  } else if (is.name(expr)) {
+    paste0("`", as.character(expr), "`")
+  } else {
+    chr <- deparse_one(expr)
+    paste0("`", chr, "`")
+  }
+}
+#' @rdname expr_label
+#' @export
+expr_name <- function(expr) {
+  switch_type(expr,
+    symbol = as_string(expr),
+    quosure = ,
+    language = {
+      name <- deparse_one(expr)
+      name <- gsub("\n.*$", "...", name)
+      name
+    },
+    if (is_scalar_atomic(expr)) {
+      # So 1L is translated to "1" and not "1L"
+      as.character(expr)
+    } else if (length(expr) == 1) {
+      name <- expr_text(expr)
+      name <- gsub("\n.*$", "...", name)
+      name
+    } else {
+      abort("`expr` must quote a symbol, scalar, or call")
+    }
+  )
+}
+#' @rdname expr_label
+#' @export
+#' @param width Width of each line.
+#' @param nlines Maximum number of lines to extract.
+expr_text <- function(expr, width = 60L, nlines = Inf) {
+  str <- deparse(expr, width.cutoff = width)
+
+  if (length(str) > nlines) {
+    str <- c(str[seq_len(nlines - 1)], "...")
+  }
+
+  paste0(str, collapse = "\n")
+}
+deparse_one <- function(expr) {
+  chr <- deparse(expr)
+  if (length(chr) > 1) {
+    dot_call <- lang(expr[[1]], quote(...))
+    chr <- paste(deparse(dot_call), collapse = "\n")
+  }
+  chr
+}
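The core of `expr_text()` above is a thin layer over base R deparsing, which can be sketched as follows: `deparse()` splits long expressions into lines at `width.cutoff`, and the lines are then collapsed with newlines.

```r
# Base R sketch of what expr_text() builds on:
e <- quote(x + y)
paste(deparse(e, width.cutoff = 60L), collapse = "\n")  # "x + y"
```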
+
+#' Set and get an expression
+#'
+#' These helpers are useful to make your function work generically
+#' with quosures and raw expressions. First call `get_expr()` to
+#' extract an expression. Once you're done processing the expression,
+#' call `set_expr()` on the original object to update the expression.
+#' You can return the result of `set_expr()`, either a formula or an
+#' expression depending on the input type. Note that `set_expr()` does
+#' not change its input, it creates a new object.
+#'
+#' @param x An expression or one-sided formula. In addition,
+#'   `set_expr()` accepts frames.
+#' @param value An updated expression.
+#' @param default A default expression to return when `x` is not an
+#'   expression wrapper. Defaults to `x` itself.
+#' @return The updated original input for `set_expr()`. A raw
+#'   expression for `get_expr()`.
+#' @export
+#' @examples
+#' f <- ~foo(bar)
+#' e <- quote(foo(bar))
+#' frame <- identity(identity(ctxt_frame()))
+#'
+#' get_expr(f)
+#' get_expr(e)
+#' get_expr(frame)
+#'
+#' set_expr(f, quote(baz))
+#' set_expr(e, quote(baz))
+set_expr <- function(x, value) {
+  if (is_quosureish(x)) {
+    f_rhs(x) <- value
+    x
+  } else {
+    value
+  }
+}
+#' @rdname set_expr
+#' @export
+get_expr <- function(x, default = x) {
+  if (is_quosureish(x)) {
+    f_rhs(x)
+  } else if (inherits(x, "frame")) {
+    x$expr
+  } else {
+    default
+  }
+}
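For a one-sided formula, what `get_expr()` extracts corresponds to plain base R subsetting of the formula call (an illustrative check, outside rlang):

```r
# A one-sided formula is a call to `~` of length 2; its RHS sits in
# the second element, which is what get_expr() returns for quosures:
f <- ~foo(bar)
identical(f[[2]], quote(foo(bar)))  # TRUE
length(f)                           # 2 for a one-sided formula
```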
+
+expr_type_of <- function(x) {
+  if (missing(x)) {
+    return("missing")
+  }
+
+  type <- typeof(x)
+  if (type %in% c("symbol", "language", "pairlist", "NULL")) {
+    type
+  } else {
+    "literal"
+  }
+}
+switch_expr <- function(.x, ...) {
+  switch(expr_type_of(.x), ...)
+}
diff --git a/R/fn.R b/R/fn.R
new file mode 100644
index 0000000..96aae72
--- /dev/null
+++ b/R/fn.R
@@ -0,0 +1,412 @@
+#' Create a function
+#'
+#' This constructs a new function given its three components: a
+#' list of arguments, a body, and a parent environment.
+#'
+#' @param args A named list of default arguments.  Note that if you
+#'   want arguments that don't have defaults, you'll need to use the
+#'   special function [alist], e.g. `alist(a = , b = 1)`
+#' @param body A language object representing the code inside the
+#'   function. Usually this will be most easily generated with
+#'   [base::quote()]
+#' @param env The parent environment of the function, defaults to the
+#'   calling environment of `new_function()`
+#' @export
+#' @examples
+#' f <- function(x) x + 3
+#' g <- new_function(alist(x = ), quote(x + 3))
+#'
+#' # The components of the functions are identical
+#' identical(formals(f), formals(g))
+#' identical(body(f), body(g))
+#' identical(environment(f), environment(g))
+#'
+#' # But the functions are not identical because `f` has a source reference
+#' identical(f, g)
+#'
+#' attr(f, "srcref") <- NULL
+#' # Now they are:
+#' stopifnot(identical(f, g))
+new_function <- function(args, body, env = caller_env()) {
+  stopifnot(all(have_name(args)), is_expr(body), is_env(env))
+
+  args <- as.pairlist(args)
+  eval_bare(call("function", args, body), env)
+}
+
+prim_eval <- eval(quote(sys.function(0)))
+is_prim_eval <- function(x) identical(x, prim_eval)
+
+#' Name of a primitive function
+#' @param prim A primitive function such as [base::c()].
+#' @export
+prim_name <- function(prim) {
+  stopifnot(is_primitive(prim))
+
+  # Workaround because R_FunTab is not public
+  name <- format(prim)
+  name <- sub("^.Primitive\\(\"", "", name)
+  name <- sub("\"\\)$", "", name)
+  name
+}
+
+#' Extract arguments from a function
+#'
+#' `fn_fmls()` returns a named list of formal arguments.
+#' `fn_fmls_names()` returns the names of the arguments.
+#' `fn_fmls_syms()` returns formals as a named list of symbols. This
+#' is especially useful for forwarding arguments in [constructed
+#' calls][lang].
+#'
+#' Unlike `formals()`, these helpers also work with primitive
+#' functions. See [is_function()] for a discussion of primitive and
+#' closure functions.
+#'
+#' @param fn A function. It is looked up in the calling frame if not
+#'   supplied.
+#' @seealso [lang_args()] and [lang_args_names()]
+#' @export
+#' @examples
+#' # Extract from current call:
+#' fn <- function(a = 1, b = 2) fn_fmls()
+#' fn()
+#'
+#' # Works with primitive functions:
+#' fn_fmls(base::switch)
+#'
+#' # fn_fmls_syms() makes it easy to forward arguments:
+#' lang("apply", !!! fn_fmls_syms(lapply))
+fn_fmls <- function(fn = caller_fn()) {
+  fn <- as_closure(fn)
+  formals(fn)
+}
+#' @rdname fn_fmls
+#' @export
+fn_fmls_names <- function(fn = caller_fn()) {
+  args <- fn_fmls(fn)
+  names(args)
+}
+#' @rdname fn_fmls
+#' @export
+fn_fmls_syms <- function(fn = caller_fn()) {
+  nms <- set_names(fn_fmls_names(fn))
+  names(nms)[match("...", nms)] <- ""
+  syms(nms)
+}
+
+
+#' Is object a function?
+#'
+#' The R language defines two different types of functions: primitive
+#' functions, which are low-level, and closures, which are the regular
+#' kind of functions.
+#'
+#' Closures are functions written in R, named after the way their
+#' arguments are scoped within nested environments (see
+#' https://en.wikipedia.org/wiki/Closure_(computer_programming)).  The
+#' root environment of the closure is called the closure
+#' environment. When closures are evaluated, a new environment called
+#' the evaluation frame is created with the closure environment as
+#' parent. This is where the body of the closure is evaluated. These
+#' closure frames appear on the evaluation stack (see [ctxt_stack()]),
+#' as opposed to primitive functions which do not necessarily have
+#' their own evaluation frame and never appear on the stack.
+#'
+#' Primitive functions are more efficient than closures for two
+#' reasons. First, they are written entirely in fast low-level
+#' code. Secondly, the mechanism by which they are passed arguments is
+#' more efficient because they often do not need the full procedure of
+#' argument matching (dealing with positional versus named arguments,
+#' partial matching, etc). One practical consequence of the special
+#' way in which primitives are passed arguments is that they
+#' technically do not have formal arguments, and [formals()] will
+#' return `NULL` if called on a primitive function. See [fn_fmls()]
+#' for a function that returns a representation of formal arguments
+#' for primitive functions. Finally, primitive functions can either
+#' take arguments lazily, like R closures do, or evaluate them eagerly
+#' before being passed on to the C code. The former kind of primitives
+#' are called "special" in R terminology, while the latter is referred
+#' to as "builtin". `is_primitive_eager()` and `is_primitive_lazy()`
+#' allow you to check whether a primitive function evaluates arguments
+#' eagerly or lazily.
+#'
+#' You will also encounter the distinction between primitive and
+#' internal functions in technical documentation. Like primitive
+#' functions, internal functions are defined at a low level and
+#' written in C. However, internal functions have no representation in
+#' the R language. Instead, they are called via a call to
+#' [base::.Internal()] within a regular closure. This ensures that
+#' they appear as normal R function objects: they obey all the usual
+#' rules of argument passing, and they appear on the evaluation stack
+#' as any other closures. As a result, [fn_fmls()] does not need to
+#' look in the `.ArgsEnv` environment to obtain a representation of
+#' their arguments, and there is no way of querying from R whether
+#' they are lazy ('special' in R terminology) or eager ('builtin').
+#'
+#' You can call primitive functions with [.Primitive()] and internal
+#' functions with [.Internal()]. However, calling internal functions
+#' in a package is forbidden by CRAN's policy because they are
+#' considered part of the private API. They often assume that they
+#' have been called with correctly formed arguments, and may cause R
+#' to crash if you call them with unexpected objects.
+#'
+#' @inheritParams type-predicates
+#' @export
+#' @examples
+#' # Primitive functions are not closures:
+#' is_closure(base::c)
+#' is_primitive(base::c)
+#'
+#' # On the other hand, internal functions are wrapped in a closure
+#' # and appear as such from the R side:
+#' is_closure(base::eval)
+#'
+#' # Both closures and primitives are functions:
+#' is_function(base::c)
+#' is_function(base::eval)
+#'
+#' # Primitive functions never appear in evaluation stacks:
+#' is_primitive(base::`[[`)
+#' is_primitive(base::list)
+#' list(ctxt_stack())[[1]]
+#'
+#' # While closures do:
+#' identity(identity(ctxt_stack()))
+is_function <- function(x) {
+  is_closure(x) || is_primitive(x)
+}
+
+#' @export
+#' @rdname is_function
+is_closure <- function(x) {
+  typeof(x) == "closure"
+}
+
+#' @export
+#' @rdname is_function
+is_primitive <- function(x) {
+  typeof(x) %in% c("builtin", "special")
+}
+#' @export
+#' @rdname is_function
+#' @examples
+#'
+#' # Many primitive functions evaluate arguments eagerly:
+#' is_primitive_eager(base::c)
+#' is_primitive_eager(base::list)
+#' is_primitive_eager(base::`+`)
+is_primitive_eager <- function(x) {
+  typeof(x) == "builtin"
+}
+#' @export
+#' @rdname is_function
+#' @examples
+#'
+#' # However, primitives that operate on expressions, like quote() or
+#' # substitute(), are lazy:
+#' is_primitive_lazy(base::quote)
+#' is_primitive_lazy(base::substitute)
+is_primitive_lazy <- function(x) {
+  typeof(x) == "special"
+}
+
+
+#' Return the closure environment of a function
+#'
+#' Closure environments define the scope of functions (see [env()]).
+#' When a function call is evaluated, R creates an evaluation frame
+#' (see [ctxt_stack()]) that inherits from the closure environment.
+#' This makes all objects defined in the closure environment and all
+#' its parents available to code executed within the function.
+#'
+#' `fn_env()` returns the closure environment of `fn`. There is also
+#' an assignment method to set a new closure environment.
+#'
+#' @param fn,x A function.
+#' @param value A new closure environment for the function.
+#' @export
+#' @examples
+#' env <- child_env("base")
+#' fn <- with_env(env, function() NULL)
+#' identical(fn_env(fn), env)
+#'
+#' other_env <- child_env("base")
+#' fn_env(fn) <- other_env
+#' identical(fn_env(fn), other_env)
+fn_env <- function(fn) {
+  if (!is_function(fn)) {
+    abort("`fn` is not a function", "type")
+  }
+  environment(fn)
+}
+
+#' @export
+#' @rdname fn_env
+`fn_env<-` <- function(x, value) {
+  if (!is_function(x)) {
+    abort("`x` is not a function", "type")
+  }
+  environment(x) <- value
+  x
+}
+
+
+#' Convert to function or closure
+#'
+#' @description
+#'
+#' * `as_function()` transforms objects to functions. It fetches
+#'   functions by name if supplied a string, or transforms
+#'   [quosures][quosure] to a proper function.
+#'
+#' * `as_closure()` first passes its argument to `as_function()`. If
+#'   the result is a primitive function, it regularises it to a proper
+#'   [closure] (see [is_function()] about primitive functions).
+#'
+#' @param x A function or formula.
+#'
+#'   If a **function**, it is used as is.
+#'
+#'   If a **formula**, e.g. `~ .x + 2`, it is converted to a function
+#'   with two arguments, `.x` (or `.`) and `.y`. This allows you to
+#'   create very compact anonymous functions with up to two inputs.
+#' @param env Environment in which to fetch the function in case `x`
+#'   is a string.
+#' @export
+#' @examples
+#' f <- as_function(~ . + 1)
+#' f(10)
+#'
+#' # Primitive functions are regularised as closures
+#' as_closure(list)
+#' as_closure("list")
+#'
+#' # Operators have `.x` and `.y` as arguments, just like lambda
+#' # functions created with the formula syntax:
+#' as_closure(`+`)
+#' as_closure(`~`)
+as_function <- function(x, env = caller_env()) {
+  coerce_type(x, friendly_type("function"),
+    primitive = ,
+    closure = {
+      x
+    },
+    formula = {
+      if (length(x) > 2) {
+        abort("Can't convert a two-sided formula to a function")
+      }
+      args <- list(... = missing_arg(), .x = quote(..1), .y = quote(..2), . = quote(..1))
+      new_function(args, f_rhs(x), f_env(x))
+    },
+    string = {
+      get(x, envir = env, mode = "function")
+    }
+  )
+}
+#' @rdname as_function
+#' @export
+as_closure <- function(x, env = caller_env()) {
+  x <- as_function(x, env = env)
+  coerce_type(x, "a closure",
+    closure =
+      x,
+    primitive = {
+      fn_name <- prim_name(x)
+
+      fn <- op_as_closure(fn_name)
+      if (!is_null(fn)) {
+        return(fn)
+      }
+
+      if (fn_name == "eval") {
+        # do_eval() starts a context with a fake primitive function as
+        # function definition. We replace it here with the .Internal()
+        # wrapper of eval() so we can match the arguments.
+        fmls <- formals(base::eval)
+      } else {
+        fmls <- formals(.ArgsEnv[[fn_name]] %||% .GenericArgsEnv[[fn_name]])
+      }
+
+      args <- syms(names(fmls))
+      args <- set_names(args)
+      names(args)[(names(args) == "...")] <- ""
+
+      prim_call <- lang(fn_name, splice(args))
+      new_function(fmls, prim_call, base_env())
+    }
+  )
+}
+
+op_as_closure <- function(prim_nm) {
+  switch(prim_nm,
+    `<-` = ,
+    `<<-` = ,
+    `=` = function(.x, .y) {
+      op <- sym(prim_nm)
+      expr <- expr(UQ(op)(UQ(enexpr(.x)), UQ(enexpr(.y))))
+      eval_bare(expr, caller_env())
+    },
+    `@` = ,
+    `$` = function(.x, .i) {
+      op <- sym(prim_nm)
+      expr <- expr(UQ(op)(.x, `!!`(enexpr(.i))))
+      eval_bare(expr)
+    },
+    `[[<-` = ,
+    `@<-` = ,
+    `$<-` = function(.x, .i, .value) {
+      op <- sym(prim_nm)
+      expr <- expr(UQ(op)(UQ(enexpr(.x)), UQ(enexpr(.i)), UQ(enexpr(.value))))
+      eval_bare(expr, caller_env())
+    },
+    `[<-` = function(.x, ...) {
+      expr <- expr(`[<-`(!! enexpr(.x), !!! exprs(...)))
+      eval_bare(expr, caller_env())
+    },
+    `(` = function(.x) .x,
+    `[` = function(.x, ...) .x[...],
+    `[[` = function(.x, ...) .x[[...]],
+    `{` = function(...) {
+      values <- list(...)
+      values[[length(values)]]
+    },
+    `&` = function(.x, .y) .x & .y,
+    `|` = function(.x, .y) .x | .y,
+    `&&` = function(.x, .y) .x && .y,
+    `||` = function(.x, .y) .x || .y,
+    `!` = function(.x) !.x,
+    `+` = function(.x, .y) if (missing(.y)) .x else .x + .y,
+    `-` = function(.x, .y) if (missing(.y)) -.x else .x - .y,
+    `*` = function(.x, .y) .x * .y,
+    `/` = function(.x, .y) .x / .y,
+    `^` = function(.x, .y) .x ^ .y,
+    `%%` = function(.x, .y) .x %% .y,
+    `<` = function(.x, .y) .x < .y,
+    `<=` = function(.x, .y) .x <= .y,
+    `>` = function(.x, .y) .x > .y,
+    `>=` = function(.x, .y) .x >= .y,
+    `==` = function(.x, .y) .x == .y,
+    `!=` = function(.x, .y) .x != .y,
+    `:` = function(.x, .y) .x : .y,
+    `~` = function(.x, .y) {
+      if (is_missing(substitute(.y))) {
+        new_formula(NULL, substitute(.x), caller_env())
+      } else {
+        new_formula(substitute(.x), substitute(.y), caller_env())
+      }
+    },
+
+    # Unsupported primitives
+    `break` = ,
+    `for` = ,
+    `function` = ,
+    `if` = ,
+    `next` = ,
+    `repeat` = ,
+    `return` = ,
+    `while` = {
+      nm <- chr_quoted(prim_nm)
+      abort(paste0("Can't coerce the primitive function ", nm, " to a closure"))
+    }
+  )
+}
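As a quick illustration of the coercion implemented above (a sketch assuming an interactive session with this version of rlang attached; `plus` is just a throwaway name):

```r
library(rlang)

# Primitives have no formals...
formals(`+`)           # NULL

# ...but their closure wrappers expose `.x` and `.y`:
plus <- as_closure(`+`)
plus(1, 2)             # 3
plus(.y = 10, .x = 1)  # arguments can now be matched by name
```

Unary use also works, because the operator wrappers check for a missing `.y`.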
diff --git a/R/formula.R b/R/formula.R
new file mode 100644
index 0000000..87a6360
--- /dev/null
+++ b/R/formula.R
@@ -0,0 +1,198 @@
+#' Create a formula
+#'
+#' @param lhs,rhs A call, name, or atomic vector.
+#' @param env An environment.
+#' @return A formula object.
+#' @seealso [new_quosure()]
+#' @export
+#' @examples
+#' new_formula(quote(a), quote(b))
+#' new_formula(NULL, quote(b))
+new_formula <- function(lhs, rhs, env = caller_env()) {
+  if (!is_env(env) && !is_null(env)) {
+    abort("`env` must be an environment")
+  }
+
+  if (is_null(lhs)) {
+    f <- lang("~", rhs)
+  } else {
+    f <- lang("~", lhs, rhs)
+  }
+
+  structure(f, class = "formula", .Environment = env)
+}
+
+#' Is object a formula?
+#'
+#' `is_formula()` tests if `x` is a call to `~`. `is_bare_formula()`
+#' tests in addition that `x` does not inherit from anything else than
+#' `"formula"`. `is_formulaish()` returns `TRUE` for both formulas and
+#' [definitions][is_definition] of the type `a := b`.
+#'
+#' The `scoped` argument pattern-matches on whether the environment
+#' bundled with the formula is valid or not. Invalid scopes may happen in
+#' nested quotations like `~~expr`, where the outer quosure is validly
+#' scoped but not the inner one. This is because `~` saves the
+#' environment when it is evaluated, and quoted formulas are by
+#' definition not evaluated.
+#'
+#' @inheritParams is_quosure
+#' @param lhs A boolean indicating whether the [formula][is_formula]
+#'   or [definition][is_definition] has a left-hand side. If `NULL`,
+#'   the LHS is not inspected.
+#' @seealso [is_quosure()] and [is_quosureish()]
+#' @export
+#' @examples
+#' x <- disp ~ am
+#' is_formula(x)
+#'
+#' is_formula(~10)
+#' is_formula(10)
+#'
+#' is_formula(quo(foo))
+#' is_bare_formula(quo(foo))
+#'
+#' # Note that unevaluated formulas are treated as bare formulas even
+#' # though they don't inherit from "formula":
+#' f <- quote(~foo)
+#' is_bare_formula(f)
+#'
+#' # However you can specify `scoped` if you need the predicate to
+#' # return FALSE for these unevaluated formulas:
+#' is_bare_formula(f, scoped = TRUE)
+#' is_bare_formula(eval(f), scoped = TRUE)
+#'
+#'
+#' # There is also a variant that returns TRUE for definitions in
+#' # addition to formulas:
+#' is_formulaish(a ~ b)
+#' is_formulaish(a := b)
+is_formula <- function(x, scoped = NULL, lhs = NULL) {
+  if (!is_formulaish(x, scoped = scoped, lhs = lhs)) {
+    return(FALSE)
+  }
+  identical(node_car(x), sym_tilde)
+}
+#' @rdname is_formula
+#' @export
+is_bare_formula <- function(x, scoped = NULL, lhs = NULL) {
+  if (!is_formula(x, scoped = scoped, lhs = lhs)) {
+    return(FALSE)
+  }
+  class <- class(x)
+  is_null(class) || identical(class, "formula")
+}
+#' @rdname is_formula
+#' @export
+is_formulaish <- function(x, scoped = NULL, lhs = NULL) {
+  .Call(rlang_is_formulaish, x, scoped, lhs)
+}
+
+#' Get or set formula components
+#'
+#' `f_rhs` extracts the right-hand side, `f_lhs` extracts the
+#' left-hand side, and `f_env` extracts the environment. All
+#' functions throw an
+#' error if `f` is not a formula.
+#'
+#' @param f,x A formula
+#' @param value The value to replace with.
+#' @export
+#' @return `f_rhs` and `f_lhs` return language objects (i.e. atomic
+#'   vectors of length 1, a name, or a call). `f_env` returns an
+#'   environment.
+#' @examples
+#' f_rhs(~ 1 + 2 + 3)
+#' f_rhs(~ x)
+#' f_rhs(~ "A")
+#' f_rhs(1 ~ 2)
+#'
+#' f_lhs(~ y)
+#' f_lhs(x ~ y)
+#'
+#' f_env(~ x)
+f_rhs <- function(f) {
+  .Call(f_rhs_, f)
+}
+
+#' @export
+#' @rdname f_rhs
+`f_rhs<-` <- function(x, value) {
+  stopifnot(is_formula(x))
+  x[[length(x)]] <- value
+  x
+}
+
+#' @export
+#' @rdname f_rhs
+f_lhs <- function(f) {
+  .Call(f_lhs_, f)
+}
+
+#' @export
+#' @rdname f_rhs
+`f_lhs<-` <- function(x, value) {
+  stopifnot(is_formula(x))
+  if (length(x) < 3) {
+    x <- duplicate(x)
+    mut_node_cdr(x, pairlist(value, x[[2]]))
+  } else {
+    x[[2]] <- value
+  }
+  x
+}
+
+copy_lang_name <- function(f, x) {
+  f[[1]] <- x[[1]]
+  f
+}
+
+#' @export
+#' @rdname f_rhs
+f_env <- function(f) {
+  if (!is_formula(f)) {
+    abort("`f` must be a formula")
+  }
+  attr(f, ".Environment")
+}
+
+#' @export
+#' @rdname f_rhs
+`f_env<-` <- function(x, value) {
+  stopifnot(is_formula(x))
+  set_attrs(x, .Environment = value)
+}
+
+#' Turn RHS of formula into a string or label
+#'
+#' Equivalent of [expr_text()] and [expr_label()] for formulas.
+#'
+#' @param x A formula.
+#' @inheritParams expr_text
+#' @export
+#' @examples
+#' f <- ~ a + b + bc
+#' f_text(f)
+#' f_label(f)
+#'
+#' # Names are quoted with ``
+#' f_label(~ x)
+#' # Strings are encoded
+#' f_label(~ "a\nb")
+#' # Long expressions are collapsed
+#' f_label(~ foo({
+#'   1 + 2
+#'   print(x)
+#' }))
+f_text <- function(x, width = 60L, nlines = Inf) {
+  expr_text(f_rhs(x), width = width, nlines = nlines)
+}
+#' @rdname f_text
+#' @export
+f_name <- function(x) {
+  expr_name(f_rhs(x))
+}
+#' @rdname f_text
+#' @export
+f_label <- function(x) {
+  expr_label(f_rhs(x))
+}
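Outside the patch, a short sketch of the accessors defined above (assuming rlang is attached):

```r
library(rlang)

f <- y ~ x
f_lhs(f)                 # the symbol `y`
f_rhs(f) <- quote(x + z)
f                        # y ~ x + z

# `f_lhs<-` can also add a LHS to a one-sided formula:
g <- ~x
f_lhs(g) <- quote(y)
g                        # y ~ x
```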
diff --git a/R/operators.R b/R/operators.R
new file mode 100644
index 0000000..c5b4989
--- /dev/null
+++ b/R/operators.R
@@ -0,0 +1,90 @@
+#' Default value for `NULL`
+#'
+#' This infix function makes it easy to replace `NULL`s with a default
+#' value. It is inspired by the way that Ruby's or operator (`||`)
+#' works.
+#'
+#' @param x,y If `x` is NULL, will return `y`; otherwise returns `x`.
+#' @export
+#' @name op-null-default
+#' @examples
+#' 1 %||% 2
+#' NULL %||% 2
+`%||%` <- function(x, y) {
+  if (is_null(x)) y else x
+}
+
+#' Replace missing values
+#'
+#' This infix function is similar to \code{\%||\%} but is vectorised
+#' and provides a default value for missing elements. It is faster
+#' than using [base::ifelse()] and does not perform type conversions.
+#'
+#' @param x,y `y` for elements of `x` that are NA; otherwise, `x`.
+#' @export
+#' @name op-na-default
+#' @seealso [op-null-default]
+#' @examples
+#' c("a", "b", NA, "c") %|% "default"
+`%|%` <- function(x, y) {
+  stopifnot(is_atomic(x) && is_scalar_atomic(y))
+  stopifnot(typeof(x) == typeof(y))
+  .Call(rlang_replace_na, x, y)
+}
+
+#' Infix attribute accessor
+#'
+#' @param x Object
+#' @param name Attribute name
+#' @export
+#' @name op-get-attr
+#' @examples
+#' factor(1:3) %@% "levels"
+#' mtcars %@% "class"
+`%@%` <- function(x, name) {
+  attr(x, name, exact = TRUE)
+}
+
+#' Definition operator
+#'
+#' The definition operator is typically used in DSL packages like
+#' `ggvis` and `data.table`. It is exported in rlang as an alias to
+#' `~`. This makes it a quoting operator that can be shared between
+#' packages for computing on the language. Since it effectively
+#' creates formulas, it is immediately compatible with rlang's
+#' formulas and interpolation features.
+#'
+#' @export
+#' @examples
+#' # This is useful to provide an alternative way of specifying
+#' # arguments in DSLs:
+#' fn <- function(...) ..1
+#' f <- fn(arg := foo(bar) + baz)
+#'
+#' is_formula(f)
+#' f_lhs(f)
+#' f_rhs(f)
+#' @name op-definition
+`:=` <- `~`
+
+#' @rdname op-definition
+#' @param x An object to test.
+#' @export
+#' @examples
+#'
+#' # A predicate is provided to distinguish formulas from the
+#' # colon-equals operator:
+#' is_definition(a := b)
+#' is_definition(a ~ b)
+is_definition <- function(x) {
+  is_formulaish(x) && identical(node_car(x), sym_def)
+}
+
+#' @rdname op-definition
+#' @export
+#' @param lhs,rhs Expressions for the LHS and RHS of the definition.
+#' @param env The evaluation environment bundled with the definition.
+new_definition <- function(lhs, rhs, env = caller_env()) {
+  def <- new_formula(lhs, rhs, env)
+  mut_node_car(def, sym_def)
+}
diff --git a/R/parse.R b/R/parse.R
new file mode 100644
index 0000000..693957c
--- /dev/null
+++ b/R/parse.R
@@ -0,0 +1,90 @@
+#' Parse R code
+#'
+#' These functions parse and transform text into R expressions. This
+#' is the first step to interpret or evaluate a piece of R code
+#' written by a programmer.
+#'
+#' `parse_expr()` returns one expression. If the text contains more
+#' than one expression (separated by semicolons or new lines), an
+#' error is issued. On the other hand `parse_exprs()` can handle
+#' multiple expressions. It always returns a list of expressions
+#' (compare with [base::parse()], which returns a [base::expression]
+#' vector). All functions also support R connections.
+#'
+#' `parse_quosure()` and `parse_quosures()` return expressions
+#' bundled in quosures rather than raw expressions.
+#'
+#' @param x Text containing expressions to parse for
+#'   `parse_expr()` and `parse_exprs()`. Can also be an R connection,
+#'   for instance to a file. If the supplied connection is not open,
+#'   it will be automatically closed and destroyed.
+#' @param env The environment for the quosures. Defaults to the
+#'   context in which the parse function was called. Can be any
+#'   object with an `as_env()` method.
+#' @return `parse_expr()` returns an expression, `parse_exprs()` a
+#'   list of expressions. The quosure variants return quosures.
+#' @seealso [base::parse()]
+#' @export
+#' @examples
+#' # parse_expr() can parse any R expression:
+#' parse_expr("mtcars %>% dplyr::mutate(cyl_prime = cyl / sd(cyl))")
+#'
+#' # A string can contain several expressions separated by ; or \n
+#' parse_exprs("NULL; list()\n foo(bar)")
+#'
+#' # The quosure variants return quosures:
+#' parse_quosure("foo %>% bar()")
+#' parse_quosures("1; 2; mtcars")
+#'
+#' # The env argument is passed to as_env(). It can be e.g. a string
+#' # representing a scoped package environment:
+#' parse_quosure("identity(letters)", env = empty_env())
+#' parse_quosures("identity(letters); mtcars", env = "base")
+#'
+#'
+#' # You can also parse source files by passing an R connection. Let's
+#' # create a file containing R code:
+#' path <- tempfile("my-file.R")
+#' cat("1; 2; mtcars", file = path)
+#'
+#' # We can now parse it by supplying a connection:
+#' parse_exprs(file(path))
+parse_expr <- function(x) {
+  exprs <- parse_exprs(x)
+
+  n <- length(exprs)
+  if (n == 0) {
+    abort("No expression to parse_expr")
+  } else if (n > 1) {
+    abort("More than one expression parsed_expr")
+  }
+
+  exprs[[1]]
+}
+#' @rdname parse_expr
+#' @export
+parse_exprs <- function(x) {
+  if (inherits(x, "connection")) {
+    if (!isOpen(x)) {
+      open(x)
+      on.exit(close(x))
+    }
+    exprs <- parse(file = x)
+  } else if (is_scalar_character(x)) {
+    exprs <- parse(text = x)
+  } else {
+    abort("`x` must be a string or a R connection")
+  }
+  as.list(exprs)
+}
+
+#' @rdname parse_expr
+#' @export
+parse_quosure <- function(x, env = caller_env()) {
+  new_quosure(parse_expr(x), as_env(env))
+}
+#' @rdname parse_expr
+#' @export
+parse_quosures <- function(x, env = caller_env()) {
+  map(parse_exprs(x), new_quosure, env = as_env(env))
+}
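A brief sketch of the parsing helpers above (assuming rlang is attached):

```r
library(rlang)

# parse_exprs() splits on `;` and newlines and returns a list:
exprs <- parse_exprs("x <- 1; f(x)\nx + 1")
length(exprs)            # 3

# parse_quosure() additionally records an environment, which is
# forwarded through as_env() (here the base package environment):
q <- parse_quosure("letters", env = "base")
eval_tidy(q)             # the 26 lowercase letters
```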
diff --git a/R/quo-unquote.R b/R/quo-unquote.R
new file mode 100644
index 0000000..957e6e8
--- /dev/null
+++ b/R/quo-unquote.R
@@ -0,0 +1,185 @@
+#' Quasiquotation of an expression
+#'
+#' @description
+#'
+#' Quasiquotation is the mechanism that makes it possible to program
+#' flexibly with
+#' [tidyeval](http://rlang.tidyverse.org/articles/tidy-evaluation.html)
+#' grammars like dplyr. It is enabled in all tidyeval functions, the
+#' most fundamental of which are [quo()] and [expr()].
+#'
+#' Quasiquotation is the combination of quoting an expression while
+#' allowing immediate evaluation (unquoting) of part of that
+#' expression. We provide both syntactic operators and functional
+#' forms for unquoting.
+#'
+#' - `UQ()` and the `!!` operator unquote their argument, which gets
+#'   evaluated immediately in the surrounding context.
+#'
+#' - `UQE()` is like `UQ()` but retrieves the expression of
+#'   [quosureish][is_quosureish] objects. It is a shortcut for `!!
+#'   get_expr(x)`. Use this with care: it is potentially unsafe to
+#'   discard the environment of the quosure.
+#'
+#' - `UQS()` and the `!!!` operator unquote and splice their
+#'   argument. The argument should evaluate to a vector or an
+#'   expression. Each component of the vector is embedded as its own
+#'   argument in the surrounding call. If the vector is named, the
+#'   names are used as argument names.
+#'
+#' @section Theory:
+#'
+#' Formally, `quo()` and `expr()` are quasiquote functions, `UQ()` is
+#' the unquote operator, and `UQS()` is the unquote splice operator.
+#' These terms have a rich history in Lisp languages, and live on in
+#' modern languages like
+#' [Julia](https://docs.julialang.org/en/stable/manual/metaprogramming/)
+#' and
+#' [Racket](https://docs.racket-lang.org/reference/quasiquote.html).
+#'
+#' @param x An expression to unquote.
+#' @name quasiquotation
+#' @aliases UQ UQE UQS
+#' @examples
+#' # Quasiquotation functions act like base::quote()
+#' quote(foo(bar))
+#' expr(foo(bar))
+#' quo(foo(bar))
+#'
+#' # In addition, they support unquoting:
+#' expr(foo(UQ(1 + 2)))
+#' expr(foo(!! 1 + 2))
+#' quo(foo(!! 1 + 2))
+#'
+#' # The !! operator is a handy syntactic shortcut for unquoting with
+#' # UQ().  However you need to be a bit careful with operator
+#' # precedence. All arithmetic and comparison operators bind more
+#' # tightly than `!`:
+#' quo(1 +  !! (1 + 2 + 3) + 10)
+#'
+#' # For this reason you should always wrap the unquoted expression
+#' # with parentheses when operators are involved:
+#' quo(1 + (!! 1 + 2 + 3) + 10)
+#'
+#' # Or you can use the explicit unquote function:
+#' quo(1 + UQ(1 + 2 + 3) + 10)
+#'
+#'
+#' # Use !!! or UQS() if you want to add multiple arguments to a
+#' # function. It must evaluate to a list:
+#' args <- list(1:10, na.rm = TRUE)
+#' quo(mean( UQS(args) ))
+#'
+#' # You can combine the two
+#' var <- quote(xyz)
+#' extra_args <- list(trim = 0.9, na.rm = TRUE)
+#' quo(mean(UQ(var) , UQS(extra_args)))
+#'
+#'
+#' # Unquoting is especially useful for successively transforming a
+#' # captured expression:
+#' quo <- quo(foo(bar))
+#' quo <- quo(inner(!! quo, arg1))
+#' quo <- quo(outer(!! quo, !!! syms(letters[1:3])))
+#' quo
+#'
+#' # Since we are building the expression in the same environment, you
+#' # can also start with raw expressions and create a quosure in the
+#' # very last step to record the dynamic environment:
+#' expr <- expr(foo(bar))
+#' expr <- expr(inner(!! expr, arg1))
+#' quo <- quo(outer(!! expr, !!! syms(letters[1:3])))
+#' quo
+NULL
+
+#' @export
+#' @rdname quasiquotation
+UQ <- function(x) {
+  x
+}
+#' @export
+#' @rdname quasiquotation
+UQE <- function(x) {
+  if (is_quosureish(x)) {
+    get_expr(x)
+  } else {
+    x
+  }
+}
+#' @export
+#' @rdname quasiquotation
+`!!` <- UQE
+#' @export
+#' @rdname quasiquotation
+UQS <- function(x) {
+  if (is_pairlist(x) || is_null(x)) {
+    x
+  } else if (is_vector(x)) {
+    as.pairlist(x)
+  } else if (identical(node_car(x), sym_curly)) {
+    node_cdr(x)
+  } else if (is_expr(x)) {
+    pairlist(x)
+  } else {
+    abort("`x` must be a vector or a language object")
+  }
+}
+#' @export
+#' @rdname quasiquotation
+#' @usage NULL
+`!!!` <- UQS
+
+#' Process unquote operators in a captured expression
+#'
+#' While all capturing functions in the tidy evaluation framework
+#' perform unquote on capture (most notably [quo()]),
+#' `expr_interp()` manually processes unquoting operators in
+#' expressions that are already captured. `expr_interp()` should be
+#' called in all user-facing functions expecting a formula as argument
+#' to provide the same quasiquotation functionality as NSE functions.
+#'
+#' @param x A function, raw expression, or formula to interpolate.
+#' @param env The environment in which unquoted expressions should be
+#'   evaluated. By default, the formula or closure environment if a
+#'   formula or a function, or the current environment otherwise.
+#' @export
+#' @examples
+#' # All tidy NSE functions like quo() unquote on capture:
+#' quo(list(!! 1 + 2))
+#'
+#' # expr_interp() is meant to provide the same functionality when you
+#' # have a formula or expression that might contain unquoting
+#' # operators:
+#' f <- ~list(!! 1 + 2)
+#' expr_interp(f)
+#'
+#' # Note that only the outer formula is unquoted (which is a reason
+#' # to use expr_interp() as early as possible in all user-facing
+#' # functions):
+#' f <- ~list(~!! 1 + 2, !! 1 + 2)
+#' expr_interp(f)
+#'
+#'
+#' # Another purpose for expr_interp() is to interpolate a closure's
+#' # body. This is useful to inline a function within another. The
+#' # important limitation is that all formal arguments of the inlined
+#' # function should be defined in the receiving function:
+#' other_fn <- function(x) toupper(x)
+#'
+#' fn <- expr_interp(function(x) {
+#'   x <- paste0(x, "_suffix")
+#'   !!! body(other_fn)
+#' })
+#' fn
+#' fn("foo")
+expr_interp <- function(x, env = NULL) {
+  if (is_formula(x)) {
+    expr <- .Call(rlang_interp, f_rhs(x), env %||% f_env(x), TRUE)
+    x <- new_quosure(expr, f_env(x))
+  } else if (is_closure(x)) {
+    body(x) <- .Call(rlang_interp, body(x), env %||% fn_env(x), TRUE)
+  } else {
+    x <- .Call(rlang_interp, x, env %||% parent.frame(), TRUE)
+  }
+  x
+}
diff --git a/R/quo.R b/R/quo.R
new file mode 100644
index 0000000..324ac07
--- /dev/null
+++ b/R/quo.R
@@ -0,0 +1,493 @@
+#' Create quosures
+#'
+#' @description
+#'
+#' Quosures are quoted [expressions][is_expr] that keep track of an
+#' [environment][env] (just like [closure
+#' functions](http://adv-r.had.co.nz/Functional-programming.html#closures)).
+#' They are implemented as a subclass of one-sided formulas. They are
+#' an essential piece of the tidy evaluation framework.
+#'
+#' - `quo()` quotes its input (i.e. captures R code without
+#'   evaluation), captures the current environment, and bundles them
+#'   in a quosure.
+#'
+#' - `enquo()` takes a symbol referring to a function argument, quotes
+#'   the R code that was supplied to this argument, captures the
+#'   environment where the function was called (and thus where the R
+#'   code was typed), and bundles them in a quosure.
+#'
+#' - [quos()] is a bit different from other functions as it returns a
+#'   list of quosures. You can supply several expressions directly,
+#'   e.g. `quos(foo, bar)`, but more importantly you can also supply
+#'   dots: `quos(...)`. In the latter case, expressions forwarded
+#'   through dots are captured and transformed to quosures. The
+#'   environments bundled in those quosures are the ones where the
+#'   code was supplied as arguments, even if the dots were forwarded
+#'   multiple times across several function calls.
+#'
+#' - `new_quosure()` is the only constructor that takes its arguments
+#'   by value. It lets you create a quosure from an expression and an
+#'   environment.
+#'
+#' @section Role of quosures for tidy evaluation:
+#'
+#' Quosures play an essential role thanks to these features:
+#'
+#' - They allow consistent scoping of quoted expressions by recording
+#'   an expression along with its local environment.
+#'
+#' - `quo()`, `quos()` and `enquo()` all support [quasiquotation]. By
+#'   unquoting other quosures, you can safely combine expressions even
+#'   when they come from different contexts. You can also unquote
+#'   values and raw expressions depending on your needs.
+#'
+#' - Unlike formulas, quosures self-evaluate (see [eval_tidy()])
+#'   within their own environment, which is why you can unquote a
+#'   quosure inside another quosure and evaluate it as if you had
+#'   unquoted a raw expression.
+#'
+#' See the [programming with
+#' dplyr](http://dplyr.tidyverse.org/articles/programming.html)
+#' vignette for practical examples. For developers, the [tidy
+#' evaluation](http://rlang.tidyverse.org/articles/tidy-evaluation.html)
+#' vignette provides an overview of this approach. The
+#' [quasiquotation] page goes in detail over the unquoting and
+#' splicing operators.
+#'
+#' @param expr An expression.
+#' @param arg A symbol referring to an argument. The expression
+#'   supplied to that argument will be captured unevaluated.
+#' @return A formula whose right-hand side contains the quoted
+#'   expression supplied as argument.
+#' @seealso [expr()] for quoting a raw expression with quasiquotation.
+#'   The [quasiquotation] page goes over unquoting and splicing.
+#' @export
+#' @examples
+#' # quo() is a quotation function just like expr() and quote():
+#' expr(mean(1:10 * 2))
+#' quo(mean(1:10 * 2))
+#'
+#' # It supports quasiquotation and allows unquoting (evaluating
+#' # immediately) part of the quoted expression:
+#' quo(mean(!! 1:10 * 2))
+#'
+#' # What makes quo() often safer to use than quote() and expr() is
+#' # that it keeps track of the contextual environment. This is
+#' # especially important if you're referring to local variables in
+#' # the expression:
+#' var <- "foo"
+#' quo <- quo(var)
+#' quo
+#'
+#' # Here `quo` quotes `var`. Let's check that it also captures the
+#' # environment where that symbol is defined:
+#' identical(get_env(quo), get_env())
+#' env_has(quo, "var")
+#'
+#'
+#' # Keeping track of the environment is important when you quote an
+#' # expression in a context (that is, a particular function frame)
+#' # and pass it around to other functions (which will be run in their
+#' # own evaluation frame):
+#' fn <- function() {
+#'   foobar <- 10
+#'   quo(foobar * 2)
+#' }
+#' quo <- fn()
+#' quo
+#'
+#' # `foobar` is not defined here but was defined in `fn()`'s
+#' # evaluation frame. However, the quosure keeps track of that frame
+#' # and is safe to evaluate:
+#' eval_tidy(quo)
+#'
+#'
+#' # Like other formulas, quosures are normally self-quoting under
+#' # evaluation:
+#' eval(~var)
+#' eval(quo(var))
+#'
+#' # But eval_tidy() evaluates expressions in a special environment
+#' # (called the overscope) where they become promises. They
+#' # self-evaluate under evaluation:
+#' eval_tidy(~var)
+#' eval_tidy(quo(var))
+#'
+#' # Note that it's perfectly fine to unquote quosures within
+#' # quosures, as long as you evaluate with eval_tidy():
+#' quo <- quo(letters)
+#' quo <- quo(toupper(!! quo))
+#' quo
+#' eval_tidy(quo)
+#'
+#'
+#' # Quoting as a quosure is necessary to preserve scope information
+#' # and make sure objects are looked up in the right place. However,
+#' # there are situations where it can get in the way. This is the
+#' # case when you deal with non-tidy NSE functions that do not
+#' # understand formulas. You can inline the RHS of a formula in a
+#' # call thanks to the UQE() operator:
+#' nse_function <- function(arg) substitute(arg)
+#' var <- locally(quo(foo(bar)))
+#' quo(nse_function(UQ(var)))
+#' quo(nse_function(UQE(var)))
+#'
+#' # This is equivalent to unquoting and taking the RHS:
+#' quo(nse_function(!! get_expr(var)))
+#'
+#' # One of the most important old-style NSE functions is the dollar
+#' # operator. You need to use UQE() for subsetting with dollar:
+#' var <- quo(cyl)
+#' quo(mtcars$UQE(var))
+#'
+#' # `!!`() is also treated as a shortcut. It is meant for situations
+#' # where the bang operator would not parse, such as subsetting with
+#' # $. Since that's its main purpose, we've made it a shortcut for
+#' # UQE() rather than UQ():
+#' var <- quo(cyl)
+#' quo(mtcars$`!!`(var))
+#'
+#'
+#' # When a quosure is printed in the console, the brackets indicate
+#' # whether the enclosure is the global environment or a local one:
+#' locally(quo(foo))
+#'
+#' # Literals are enquosed with the empty environment because they can
+#' # be evaluated anywhere. The brackets indicate "empty":
+#' quo(10L)
+#'
+#' # To differentiate local environments, use str(). It prints the
+#' # machine address of the environment:
+#' quo1 <- locally(quo(foo))
+#' quo2 <- locally(quo(foo))
+#' quo1; quo2
+#' str(quo1); str(quo2)
+#'
+#' # You can also see this address by printing the environment at the
+#' # console:
+#' get_env(quo1)
+#' get_env(quo2)
+#'
+#'
+#' # new_quosure() takes by value an expression that is already quoted:
+#' expr <- quote(mtcars)
+#' env <- as_env("datasets")
+#' quo <- new_quosure(expr, env)
+#' quo
+#' eval_tidy(quo)
+#' @name quosure
+quo <- function(expr) {
+  enquo(expr)
+}
+#' @rdname quosure
+#' @inheritParams as_quosure
+#' @export
+new_quosure <- function(expr, env = caller_env()) {
+  quo <- new_formula(NULL, expr, env)
+  set_attrs(quo, class = c("quosure", "formula"))
+}
+#' @rdname quosure
+#' @export
+enquo <- function(arg) {
+  if (missing(arg)) {
+    return(new_quosure(missing_arg(), empty_env()))
+  }
+
+  capture <- lang(captureArg, substitute(arg))
+  arg <- eval_bare(capture, caller_env())
+  expr <- .Call(rlang_interp, arg$expr, arg$env, TRUE)
+  forward_quosure(expr, arg$env)
+}
+forward_quosure <- function(expr, env) {
+  if (is_quosure(expr)) {
+    expr
+  } else if (is_definition(expr)) {
+    as_quosureish(expr, env)
+  } else if (is_symbolic(expr)) {
+    new_quosure(expr, env)
+  } else {
+    new_quosure(expr, empty_env())
+  }
+}
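For readers following along, the quosure built by `new_quosure()` above is just a one-sided formula (a call to `~` with a single argument) carrying an environment attribute and a `quosure` class. A minimal base-R sketch of that representation, for illustration only (the `sketch_quosure()` helper is hypothetical; the real constructor goes through `new_formula()` and `set_attrs()`):

```r
# Build a quosure-like object from a quoted expression and an environment.
sketch_quosure <- function(expr, env = parent.frame()) {
  f <- call("~", expr)               # one-sided formula call: ~expr
  attr(f, ".Environment") <- env     # bundle the scope, like formulas do
  class(f) <- c("quosure", "formula")
  f
}

q <- sketch_quosure(quote(x + 1), env = list2env(list(x = 41)))

# Evaluating the RHS in the bundled environment is the essence of what
# eval_tidy() does with a quosure:
eval(q[[2]], environment(q))  # returns 42
```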
+
+#' @export
+print.quosure <- function(x, ...) {
+  cat(paste0("<quosure: ", env_type(get_env(x)), ">\n"))
+  print(set_attrs(x, NULL))
+  invisible(x)
+}
+#' @export
+str.quosure <- function(object, ...) {
+  env_type <- env_format(get_env(object))
+
+  cat(paste0("<quosure: ", env_type, ">\n"))
+  print(set_attrs(object, NULL))
+  invisible(object)
+}
+
+#' Is an object a quosure or quosure-like?
+#'
+#' @description
+#'
+#' These predicates test for [quosure] objects.
+#'
+#' - `is_quosure()` tests for a tidyeval quosure. These are one-sided
+#'   formulas with a `quosure` class.
+#'
+#' - `is_quosureish()` tests for general R quosure objects: quosures,
+#'   or one-sided formulas.
+#'
+#' @param x An object to test.
+#' @param scoped A boolean indicating whether the quosure or formula
+#'   is scoped, that is, has a valid environment attribute. If `NULL`,
+#'   the scope is not inspected.
+#' @seealso [is_formula()] and [is_formulaish()]
+#' @export
+#' @examples
+#' # Quosures are created with quo():
+#' quo(foo)
+#' is_quosure(quo(foo))
+#'
+#' # Formulas look similar to quosures but are not quosures:
+#' is_quosure(~foo)
+#'
+#' # But they are quosureish:
+#' is_quosureish(~foo)
+#'
+#' # Note that two-sided formulas are never quosureish:
+#' is_quosureish(a ~ b)
+is_quosure <- function(x) {
+  inherits(x, "quosure")
+}
+#' @rdname is_quosure
+#' @export
+is_quosureish <- function(x, scoped = NULL) {
+  is_formula(x, scoped = scoped, lhs = FALSE)
+}
+is_one_sided <- function(x, lang_sym = sym_tilde) {
+  typeof(x) == "language" &&
+    identical(node_car(x), lang_sym) &&
+    is_null(node_cadr(node_cdr(x)))
+}
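The `is_one_sided()` check above works on C-level pairlist nodes. The same test can be sketched with plain base-R subsetting (hypothetical helper, shown only to clarify what the node walk verifies):

```r
# A one-sided formula is a call to `~` with exactly one argument,
# i.e. length 2 once the `~` symbol itself is counted.
is_one_sided_sketch <- function(x) {
  is.call(x) &&
    identical(x[[1]], as.name("~")) &&
    length(x) == 2
}

is_one_sided_sketch(~foo)        # TRUE
is_one_sided_sketch(a ~ b)       # FALSE: two-sided
is_one_sided_sketch(quote(foo))  # FALSE: not a formula call
```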
+
+#' Coerce object to quosure
+#'
+#' @description
+#'
+#' Quosure objects wrap an [expression][is_expr] with a [lexical
+#' enclosure][env]. This is a powerful quoting (see [base::quote()]
+#' and [quo()]) mechanism that makes it possible to carry and
+#' manipulate expressions while making sure that their symbolic content
+#' (symbols and named calls, see [is_symbolic()]) is correctly looked
+#' up during evaluation.
+#'
+#' - `new_quosure()` creates a quosure from a raw expression and an
+#'   environment.
+#'
+#' - `as_quosure()` is useful for functions that expect quosures but
+#'   allow specifying a raw expression as well. It has two possible
+#'   effects: if `x` is not a quosure, it wraps it into a quosure
+#'   bundling `env` as scope. If `x` is an unscoped quosure (see
+#'   [is_quosure()]), `env` is used as a default scope. On the other
+#'   hand if `x` has a valid enclosure, it is returned as is (even if
+#'   `env` is not the same as the formula environment).
+#'
+#' - While `as_quosure()` always returns a quosure (a one-sided
+#'   formula), even when its input is a [formula][new_formula] or a
+#'   [definition][op-definition], `as_quosureish()` returns quosureish
+#'   inputs as is.
+#'
+#' @param x An object to convert.
+#' @param env An environment specifying the lexical enclosure of the
+#'   quosure.
+#' @seealso [is_quosure()]
+#' @export
+#' @examples
+#' # Sometimes you get unscoped formulas because of quotation:
+#' f <- ~~expr
+#' inner_f <- f_rhs(f)
+#' str(inner_f)
+#' is_quosureish(inner_f, scoped = TRUE)
+#'
+#' # You can use as_quosure() to provide a default environment:
+#' as_quosure(inner_f, base_env())
+#'
+#' # Or convert expressions or any R object to a validly scoped quosure:
+#' as_quosure(quote(expr), base_env())
+#' as_quosure(10L, base_env())
+#'
+#'
+#' # While as_quosure() always returns a quosure (one-sided formula),
+#' # as_quosureish() returns quosureish objects:
+#' as_quosure(a := b)
+#' as_quosureish(a := b)
+#' as_quosureish(10L)
+as_quosure <- function(x, env = caller_env()) {
+  if (is_quosure(x)) {
+    x
+  } else if (is_bare_formula(x)) {
+    new_quosure(f_rhs(x), f_env(x) %||% env)
+  } else if (is_symbolic(x)) {
+    new_quosure(x, env)
+  } else {
+    new_quosure(x, empty_env())
+  }
+}
+#' @rdname as_quosure
+#' @export
+as_quosureish <- function(x, env = caller_env()) {
+  if (is_quosureish(x)) {
+    if (!is_env(f_env(x))) {
+      f_env(x) <- env
+    }
+    x
+  } else if (is_frame(x)) {
+    new_quosure(x$expr, sys_frame(x$caller_pos))
+  } else {
+    new_quosure(get_expr(x), get_env(x, env))
+  }
+}
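The default-environment behaviour described above can be seen with plain attributes: a formula that already carries an `.Environment` keeps it, and the supplied `env` only fills the gap. A base-R sketch (the `with_default_env()` helper is hypothetical, not the rlang implementation):

```r
# Attach `default` as the environment only when none is recorded yet.
with_default_env <- function(f, default) {
  if (is.null(attr(f, ".Environment"))) {
    attr(f, ".Environment") <- default
  }
  f
}

# Quoting a formula yields an unscoped inner formula, as in the
# documentation examples above:
inner <- (~ ~x)[[2]]
is.null(attr(inner, ".Environment"))  # TRUE

scoped <- with_default_env(inner, baseenv())
identical(attr(scoped, ".Environment"), baseenv())  # TRUE
```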
+
+#' Is a quosure quoting a symbolic, missing or NULL object?
+#'
+#' These functions examine the expression of a quosure with a
+#' predicate.
+#'
+#' @section Empty quosures:
+#'
+#' When missing arguments are captured as quosures, either through
+#' [enquo()] or [quos()], they are returned as empty quosures. These
+#' quosures contain the [missing argument][missing_arg] and typically
+#' have the [empty environment][empty_env] as enclosure.
+#'
+#' @param quo A quosure.
+#' @examples
+#' quo_is_symbol(quo(sym))
+#' quo_is_symbol(quo(foo(bar)))
+#'
+#' # You can create empty quosures by calling quo() without input:
+#' quo <- quo()
+#' quo_is_missing(quo)
+#' is_missing(f_rhs(quo))
+#' @name quo-predicates
+NULL
+
+#' @rdname quo-predicates
+#' @export
+quo_is_missing <- function(quo) {
+  is_missing(f_rhs(quo))
+}
+#' @rdname quo-predicates
+#' @export
+quo_is_symbol <- function(quo) {
+  is_symbol(f_rhs(quo))
+}
+#' @rdname quo-predicates
+#' @export
+quo_is_lang <- function(quo) {
+  is_lang(f_rhs(quo))
+}
+#' @rdname quo-predicates
+#' @export
+quo_is_symbolic <- function(quo) {
+  is_symbolic(f_rhs(quo))
+}
+#' @rdname quo-predicates
+#' @export
+quo_is_null <- function(quo) {
+  is_null(f_rhs(quo))
+}
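Each `quo_is_*()` predicate above simply inspects the right-hand side of the quosure with the corresponding base predicate. With plain one-sided formula calls the same checks look like this (the `rhs_*()` helpers are hypothetical analogues):

```r
rhs <- function(q) q[[2]]  # RHS of a one-sided formula call

rhs_is_symbol <- function(q) is.name(rhs(q))  # quo_is_symbol() analogue
rhs_is_call   <- function(q) is.call(rhs(q))  # quo_is_lang() analogue
rhs_is_null   <- function(q) is.null(rhs(q))  # quo_is_null() analogue

rhs_is_symbol(~sym)           # TRUE
rhs_is_call(~foo(bar))        # TRUE
rhs_is_null(call("~", NULL))  # TRUE
```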
+
+
+#' Splice a quosure and format it into string or label
+#'
+#' `quo_expr()` flattens all quosures within an expression. I.e., it
+#' turns `~foo(~bar(), ~baz)` into `foo(bar(), baz)`. `quo_text()` and
+#' `quo_label()` are equivalent to [f_text()], [expr_label()], etc,
+#' but they first splice their argument using `quo_expr()`.
+#' `quo_name()` transforms a quoted symbol to a string. It adds a bit
+#' more intent and type checking than simply calling `quo_text()` on
+#' the quoted symbol (which will work but won't signal an error if the
+#' input is not a symbol).
+#'
+#' @inheritParams expr_label
+#' @param quo A quosure or expression.
+#' @param warn Whether to warn if the quosure contains other quosures
+#'   (those will be collapsed).
+#' @export
+#' @seealso [expr_label()], [f_label()]
+#' @examples
+#' quo <- quo(foo(!! quo(bar)))
+#' quo
+#'
+#' # quo_expr() unwraps all quosures and returns a raw expression:
+#' quo_expr(quo)
+#'
+#' # This is used by quo_text() and quo_label():
+#' quo_text(quo)
+#'
+#' # Compare to the unwrapped expression:
+#' expr_text(quo)
+#'
+#' # quo_name() is helpful when you need really short labels:
+#' quo_name(quo(sym))
+#' quo_name(quo(!! sym))
+quo_expr <- function(quo, warn = FALSE) {
+  # Never warn when unwrapping outer quosure
+  if (is_quosure(quo)) {
+    quo <- f_rhs(quo)
+  }
+  quo_splice(duplicate(quo), warn = warn)
+}
+#' @rdname quo_expr
+#' @export
+quo_label <- function(quo) {
+  expr_label(quo_expr(quo))
+}
+#' @rdname quo_expr
+#' @export
+quo_text <- function(quo, width = 60L, nlines = Inf) {
+  expr_text(quo_expr(quo), width = width, nlines = nlines)
+}
+#' @rdname quo_expr
+#' @export
+quo_name <- function(quo) {
+  expr_name(quo_expr(quo))
+}
+
+quo_splice <- function(x, parent = NULL, warn = FALSE) {
+  switch_expr(x,
+    language = {
+      if (is_quosure(x)) {
+        if (!is_false(warn)) {
+          if (is_string(warn)) {
+            msg <- warn
+          } else {
+            msg <- "Collapsing inner quosure"
+          }
+          warn(msg)
+          warn <- FALSE
+        }
+
+        while (is_quosure(x)) {
+          x <- f_rhs(x)
+        }
+        if (!is_null(parent)) {
+          mut_node_car(parent, x)
+        }
+        quo_splice(x, parent, warn = warn)
+      } else {
+        quo_splice(node_cdr(x), warn = warn)
+      }
+    },
+    pairlist = {
+      while (!is_null(x)) {
+        quo_splice(node_car(x), x, warn = warn)
+        x <- node_cdr(x)
+      }
+    }
+  )
+
+  x
+}
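`quo_splice()` mutates pairlist nodes in place at the C level. The same flattening can be sketched with ordinary copying semantics in base R (hypothetical helper; it handles bare one-sided formulas rather than classed quosures, and omits the collapse warning):

```r
splice_sketch <- function(x) {
  # Unwrap any (possibly nested) one-sided `~` wrappers
  while (is.call(x) && identical(x[[1]], as.name("~")) && length(x) == 2) {
    x <- x[[2]]
  }
  # Recurse into the components of remaining calls
  if (is.call(x)) {
    for (i in seq_along(x)) {
      x[[i]] <- splice_sketch(x[[i]])
    }
  }
  x
}

splice_sketch(quote(foo(~bar(), ~baz)))  # foo(bar(), baz)
```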
diff --git a/R/quos.R b/R/quos.R
new file mode 100644
index 0000000..b9bdc4d
--- /dev/null
+++ b/R/quos.R
@@ -0,0 +1,133 @@
+#' Tidy quotation of multiple expressions and dots
+#'
+#' `quos()` quotes its arguments and returns them as a list of
+#' quosures (see [quo()]). It is especially useful to capture
+#' arguments forwarded through `...`.
+#'
+#' Both `quos()` and `dots_definitions()` have specific support for
+#' definition expressions of the type `var := expr`, with some
+#' differences:
+#'
+#' \describe{
+#'  \item{`quos()`}{
+#'    When `:=` definitions are supplied to `quos()`, they are treated
+#'    as a synonym of argument assignment `=`. On the other hand, they
+#'    allow unquoting operators on the left-hand side, which makes it
+#'    easy to assign names programmatically.}
+#'  \item{`dots_definitions()`}{
+#'    This dots capturing function returns definitions as is. Unquote
+#'    operators are processed on capture, in both the LHS and the
+#'    RHS. Unlike `quos()`, it allows named definitions.}
+#' }
+#' @param ... Expressions to capture unevaluated.
+#' @inheritParams dots_values
+#' @param .named Whether to ensure all dots are named. Unnamed
+#'   elements are processed with [expr_text()] to figure out a default
+#'   name. If an integer, it is passed to the `width` argument of
+#'   `expr_text()`; if `TRUE`, the default width is used. See
+#'   [exprs_auto_name()].
+#' @export
+#' @name quosures
+#' @examples
+#' # quos() is like the singular version but allows quoting
+#' # several arguments:
+#' quos(foo(), bar(baz), letters[1:2], !! letters[1:2])
+#'
+#' # It is most useful when used with dots. This allows quoting
+#' # expressions across different levels of function calls:
+#' fn <- function(...) quos(...)
+#' fn(foo(bar), baz)
+#'
+#' # Note that quos() does not check for duplicate named
+#' # arguments:
+#' fn <- function(...) quos(x = x, ...)
+#' fn(x = a + b)
+#'
+#'
+#' # Dots can be spliced in:
+#' args <- list(x = 1:3, y = ~var)
+#' quos(!!! args, z = 10L)
+#'
+#' # Raw expressions are turned to formulas:
+#' args <- alist(x = foo, y = bar)
+#' quos(!!! args)
+#'
+#'
+#' # Definitions are treated similarly to named arguments:
+#' quos(x := expr, y = expr)
+#'
+#' # However, the LHS of definitions can be unquoted. The unquoted
+#' # value must be a symbol or a string:
+#' var <- "foo"
+#' quos(!!var := expr)
+#'
+#' # If you need the full LHS expression, use dots_definitions():
+#' dots <- dots_definitions(var = foo(baz) := bar(baz))
+#' dots$defs
+quos <- function(..., .named = FALSE,
+                 .ignore_empty = c("trailing", "none", "all")) {
+  dots <- dots_enquose(...)
+  dots <- dots_clean_empty(dots, quo_is_missing, .ignore_empty)
+
+  if (.named) {
+    width <- quo_names_width(.named)
+    dots <- quos_auto_name(dots, width)
+  }
+  set_attrs(dots, class = "quosures")
+}
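The capture step performed by `quos()` can be approximated in base R by recording the unevaluated dots together with the caller environment. A hedged sketch (the `quos_sketch()` helper is hypothetical: it supports no quasiquotation, and unlike the real `quos()` it cannot trace forwarded dots back to their original environments):

```r
quos_sketch <- function(...) {
  # Capture the dots unevaluated and remember where they came from
  exprs <- as.list(substitute(list(...)))[-1]
  env <- parent.frame()
  lapply(exprs, function(e) {
    f <- call("~", e)
    attr(f, ".Environment") <- env
    class(f) <- c("quosure", "formula")
    f
  })
}

qs <- quos_sketch(foo(), bar)
length(qs)  # 2 captured quosure-like objects
```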
+
+#' @rdname quosures
+#' @param x An object to test.
+#' @export
+is_quosures <- function(x) {
+  inherits(x, "quosures")
+}
+#' @export
+`[.quosures` <- function(x, i) {
+  set_attrs(NextMethod(), class = "quosures")
+}
+#' @export
+c.quosures <- function(..., recursive = FALSE) {
+  structure(NextMethod(), class = "quosures")
+}
+
+quo_names_width <- function(named) {
+  if (is_true(named)) {
+    60L
+  } else if (is_scalar_integerish(named)) {
+    named
+  } else {
+    abort("`.named` must be a scalar logical or a numeric")
+  }
+}
+
+#' Ensure that a list of expressions is fully named
+#'
+#' This gives default names to unnamed elements of a list of
+#' expressions (or expression wrappers such as formulas or tidy
+#' quotes). `exprs_auto_name()` deparses the expressions with
+#' [expr_text()] by default. `quos_auto_name()` deparses with
+#' [quo_text()].
+#'
+#' @param exprs A list of expressions.
+#' @param width Maximum width of names.
+#' @param printer A function that takes an expression and converts it
+#'   to a string. This function must take an expression as first
+#'   argument and `width` as second argument.
+#' @export
+exprs_auto_name <- function(exprs, width = 60L, printer = expr_text) {
+  have_name <- have_name(exprs)
+
+  if (any(!have_name)) {
+    nms <- map_chr(exprs[!have_name], printer, width = width)
+    names(exprs)[!have_name] <- nms
+  }
+
+  exprs
+}
+#' @rdname exprs_auto_name
+#' @param quos A list of quosures.
+#' @export
+quos_auto_name <- function(quos, width = 60L) {
+  exprs_auto_name(quos, width = width, printer = quo_text)
+}
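The defaulting performed by `exprs_auto_name()` boils down to deparsing the unnamed elements. A base-R approximation (hypothetical helper; the real function dispatches through a `printer` such as `expr_text()` or `quo_text()`):

```r
auto_name_sketch <- function(exprs, width = 60L) {
  nms <- names(exprs)
  if (is.null(nms)) nms <- rep("", length(exprs))
  unnamed <- nms == ""
  # Deparse only the elements that lack a name
  nms[unnamed] <- vapply(
    exprs[unnamed],
    function(e) deparse(e, width.cutoff = width)[[1]],
    character(1)
  )
  names(exprs) <- nms
  exprs
}

names(auto_name_sketch(list(quote(mean(x)), y = quote(z))))
# c("mean(x)", "y")
```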
diff --git a/R/rlang.R b/R/rlang.R
new file mode 100644
index 0000000..9f5ddf1
--- /dev/null
+++ b/R/rlang.R
@@ -0,0 +1,2 @@
+#' @useDynLib rlang, .registration = TRUE
+NULL
diff --git a/R/stack.R b/R/stack.R
new file mode 100644
index 0000000..829c0a3
--- /dev/null
+++ b/R/stack.R
@@ -0,0 +1,681 @@
+#' Call stack information
+#'
+#' The `ctxt_` and `call_` families of functions provide a replacement
+#' for the base R functions prefixed with `sys.` (which are all about
+#' the context stack), as well as for [parent.frame()] (which is the
+#' only base R function for querying the call stack). The context
+#' stack includes all R-level evaluation contexts. It is linear in
+#' terms of execution history but due to lazy evaluation it is
+#' potentially nonlinear in terms of call history. The call stack
+#' history, on the other hand, is homogeneous.
+#'
+#' `ctxt_frame()` and `call_frame()` return a `frame` object
+#' containing the following fields: `expr` and `env` (call expression
+#' and evaluation environment), `pos` and `caller_pos` (position of
+#' current frame in the context stack and position of the caller), and
+#' `fun` (function of the current frame). `ctxt_stack()` and
+#' `call_stack()` return a list of all context or call frames on the
+#' stack. Finally, `ctxt_depth()` and `call_depth()` report the
+#' current context position or the number of calling frames on the
+#' stack.
+#'
+#' The base R functions take two sorts of arguments to indicate which
+#' frame to query: `which` and `n`. The `n` argument is
+#' straightforward: it's the number of frames to go down the stack,
+#' with `n = 1` referring to the current context. The `which` argument
+#' is more complicated and changes meaning for values lower than
+#' 1. For the sake of consistency, the rlang functions all take the
+#' same kind of argument `n`. This argument has a single meaning (the
+#' number of frames to go down the stack) and cannot be lower than 1.
+#'
+#' Note finally that `parent.frame(1)` corresponds to
+#' `call_frame(2)$env`, as `n = 1` always refers to the current
+#' frame. This makes the `_frame()` and `_stack()` functions
+#' consistent: `ctxt_frame(2)` is the same as `ctxt_stack()[[2]]`.
+#' Also, `ctxt_depth()` returns one more frame than
+#' [base::sys.nframe()] because it counts the global frame. That is
+#' consistent with the `_stack()` functions which return the global
+#' frame as well. This way, `call_frame(call_depth())` is the same as
+#' `global_frame()`.
+#'
+#' @param n The number of frames to go back in the stack.
+#' @param clean Whether to post-process the call stack to clean
+#'   non-standard frames. If `TRUE`, suboptimal call-stack entries by
+#'   [base::eval()] will be cleaned up: the duplicate frame created by
+#'   `eval()` is eliminated.
+#' @param trim The number of layers of intervening frames to trim off
+#'   the stack. See [stack_trim()] and examples.
+#' @name stack
+#' @examples
+#' # Expressions within arguments count as contexts
+#' identity(identity(ctxt_depth())) # returns 2
+#'
+#' # But they are not part of the call stack because arguments are
+#' # evaluated within the calling function (or the global environment
+#' # if called at top level)
+#' identity(identity(call_depth())) # returns 0
+#'
+#' # The context stack includes all intervening execution frames. The
+#' # call stack doesn't:
+#' f <- function(x) identity(x)
+#' f(f(ctxt_stack()))
+#' f(f(call_stack()))
+#'
+#' g <- function(cmd) cmd()
+#' f(g(ctxt_stack))
+#' f(g(call_stack))
+#'
+#' # The rlang _stack() functions return a list of frame
+#' # objects. Use purrr::transpose() or purrr::map() to extract a
+#' # particular field from a stack:
+#'
+#' # stack <- f(f(call_stack()))
+#' # purrr::map(stack, "env")
+#' # purrr::transpose(stack)$expr
+#'
+#' # current_frame() is an alias for ctxt_frame(1)
+#' fn <- function() list(current = current_frame(), first = ctxt_frame(1))
+#' fn()
+#'
+#' # While current_frame() is the top of the stack, global_frame() is
+#' # the bottom:
+#' fn <- function() {
+#'   n <- ctxt_depth()
+#'   ctxt_frame(n)
+#' }
+#' identical(fn(), global_frame())
+#'
+#'
+#' # ctxt_stack() returns a stack with all intervening frames. You can
+#' # trim layers of intervening frames with the trim argument:
+#' identity(identity(ctxt_stack()))
+#' identity(identity(ctxt_stack(trim = 1)))
+#'
+#' # ctxt_stack() is called within fn() with intervening frames:
+#' fn <- function(trim) identity(identity(ctxt_stack(trim = trim)))
+#' fn(0)
+#'
+#' # We can trim the first layer of those:
+#' fn(1)
+#'
+#' # The outside intervening frames (at the fn() call site) are still
+#' # returned, but can be trimmed as well:
+#' identity(identity(fn(1)))
+#' identity(identity(fn(2)))
+#'
+#' g <- function(trim) identity(identity(fn(trim)))
+#' g(2)
+#' g(3)
+NULL
+
+
+# Evaluation frames --------------------------------------------------
+
+new_frame <- function(x) {
+  structure(x, class = "frame")
+}
+#' @export
+print.frame <- function(x, ...) {
+  cat("<frame ", x$pos, ">", sep = "")
+  if (!x$pos) {
+    cat(" [global]\n")
+  } else {
+    cat(" (", x$caller_pos, ")\n", sep = "")
+  }
+
+  expr <- deparse(x$expr)
+  if (length(expr) > 1) {
+    expr <- paste(expr[[1]], "<...>")
+  }
+  cat("expr: ", expr, "\n", sep = "")
+  cat("env:  [", env_format(x$env), "]\n", sep = "")
+}
+#' Is object a frame?
+#'
+#' @param x Object to test
+#' @export
+is_frame <- function(x) {
+  inherits(x, "frame")
+}
+
+#' @rdname stack
+#' @export
+global_frame <- function() {
+  new_frame(list(
+    pos = 0L,
+    caller_pos = NA_integer_,
+    expr = NULL,
+    env = globalenv(),
+    fn = NULL,
+    fn_name = NULL
+  ))
+}
+#' @rdname stack
+#' @export
+current_frame <- function() {
+  ctxt_frame(2)
+}
+
+#' @rdname stack
+#' @export
+ctxt_frame <- function(n = 1) {
+  stopifnot(n > 0)
+  pos <- sys.nframe() - n
+
+  if (pos < 0L) {
+    stop("not that many frames on the stack", call. = FALSE)
+  } else if (pos == 0L) {
+    global_frame()
+  } else {
+    new_frame(list(
+      pos = pos,
+      caller_pos = sys.parent(n + 1),
+      expr = sys.call(-n),
+      env = sys.frame(-n),
+      fn = sys.function(-n),
+      fn_name = lang_name(sys.call(-n))
+    ))
+  }
+}
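`ctxt_frame()` is essentially bookkeeping over the base `sys.*` accessors. The correspondence can be seen directly in a runnable base-R sketch (hypothetical helper; real frames also record `caller_pos`, the function object, and its name):

```r
frame_sketch <- function() {
  list(
    pos  = sys.nframe(),  # position of this context on the stack
    expr = sys.call(0),   # the call that opened the context
    env  = sys.frame(0)   # its evaluation environment
  )
}

f <- frame_sketch()
f$pos                                            # depth at which it ran
identical(f$expr[[1]], as.name("frame_sketch"))  # TRUE
```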
+
+# Positions of frames in the call stack up to `n`
+trail_make <- function(callers, n = NULL, clean = TRUE) {
+  n_ctxt <- length(callers)
+  if (is.null(n)) {
+    n_max <- n_ctxt
+  } else {
+    if (n > n_ctxt) {
+      stop("not that many frames on the evaluation stack", call. = FALSE)
+    }
+    n_max <- n + 1
+  }
+
+  state <- trail_next(callers, 1, clean)
+  if (!length(state$i) || state$i == 0) {
+    return(0L)
+  }
+  j <- 1
+
+  # Preallocate a sufficiently large vector
+  out <- integer(n_max)
+  out[j] <- state$i
+
+  while (state$i != 0 && j < n_max) {
+    j <- j + 1
+    n_ctxt <- length(state$callers)
+    next_pos <- n_ctxt - state$i + 1
+    state <- trail_next(state$callers, next_pos, clean)
+    out[j] <- state$i
+  }
+
+  # Return relevant subset
+  if (!is.null(n) && n > j) {
+    stop("not that many frames on the call stack", call. = FALSE)
+  }
+  out[seq_len(j)]
+}
+
+trail_next <- function(callers, i, clean) {
+  if (i == 0L) {
+    return(list(callers = callers, i = 0L))
+  }
+
+  i <- callers[i]
+
+  if (clean) {
+    # base::Recall() creates a custom context with the wrong sys.parent()
+    if (identical(sys.function(i - 1L), base::Recall)) {
+      i_pos <- trail_index(callers, i)
+      callers[i_pos] <- i - 1L
+    }
+
+    # The R-level eval() creates two contexts. We skip the second one
+    if (length(i) && is_prim_eval(sys.function(i))) {
+      n_ctxt <- length(callers)
+      special_eval_pos <- trail_index(callers, i)
+      callers <- callers[-special_eval_pos]
+      i <- i - 1L
+    }
+
+  }
+
+  list(callers = callers, i = i)
+}
+
+trail_index <- function(callers, i) {
+  n_ctxt <- length(callers)
+  n_ctxt - i + 1L
+}
+
+#' @rdname stack
+#' @export
+call_frame <- function(n = 1, clean = TRUE) {
+  stopifnot(n > 0)
+
+  eval_callers <- ctxt_stack_callers()
+  trail <- trail_make(eval_callers, n, clean = clean)
+  pos <- trail[n]
+
+  if (identical(pos, 0L)) {
+    return(global_frame())
+  }
+
+  frame <- new_frame(list(
+    pos = pos,
+    caller_pos = trail[n + 1],
+    expr = sys.call(pos),
+    env = sys.frame(pos),
+    fn = sys.function(pos),
+    fn_name = lang_name(sys.call(pos))
+  ))
+
+  if (clean) {
+    frame <- frame_clean_eval(frame)
+  }
+  frame
+}
+
+#' Get the environment of the caller frame
+#'
+#' `caller_frame()` is a shortcut for `call_frame(2)` and
+#' `caller_env()` and `caller_fn()` are shortcuts for
+#' `call_frame(2)$env` and `call_frame(2)$fn` respectively.
+#'
+#' @param n The number of generations to go back. Note that unlike
+#'   [call_frame()], `n = 1` represents the parent frame rather than the
+#'   current frame.
+#' @seealso [call_frame()]
+#' @export
+caller_env <- function(n = 1) {
+  parent.frame(n + 1)
+}
+#' @rdname caller_env
+#' @export
+caller_frame <- function(n = 1) {
+  call_frame(n + 2)
+}
+#' @rdname caller_env
+#' @export
+caller_fn <- function(n = 1) {
+  call_frame(n + 2)$fn
+}
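The `n + 1` offsets above exist because `parent.frame()` is relative to the frame that calls it: a helper that wants its *caller's* caller must skip its own frame. A runnable base-R illustration (`caller_env_sketch()` and the surrounding functions are hypothetical):

```r
caller_env_sketch <- function(n = 1) {
  # n = 1 must mean "my caller's caller", so add one to skip this frame
  parent.frame(n + 1)
}

outer_fn <- function() {
  local_var <- "found in outer_fn"
  inner_fn()
}
inner_fn <- function() {
  env <- caller_env_sketch()  # environment of outer_fn(), not inner_fn()
  get("local_var", envir = env)
}

outer_fn()  # "found in outer_fn"
```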
+
+
+# The _depth() functions count the global frame as well
+
+#' @rdname stack
+#' @export
+ctxt_depth <- function() {
+  sys.nframe()
+}
+#' @rdname stack
+#' @export
+call_depth <- function() {
+  eval_callers <- ctxt_stack_callers()
+  trail <- trail_make(eval_callers)
+  length(trail)
+}
+
+
+# Summaries ----------------------------------------------------------
+
+#' @rdname stack
+#' @export
+ctxt_stack <- function(n = NULL, trim = 0) {
+  stack_data <- list(
+    pos = ctxt_stack_trail(),
+    caller_pos = ctxt_stack_callers(),
+    expr = ctxt_stack_exprs(),
+    env = ctxt_stack_envs(),
+    fn = ctxt_stack_fns()
+  )
+
+  # Remove ctxt_stack() from stack
+  stack_data <- map(stack_data, drop_first)
+
+  stack_data <- stack_subset(stack_data, n)
+  stack_data$fn_name <- map(stack_data$expr, lang_name)
+
+  stack <- transpose(stack_data)
+  stack <- map(stack, new_frame)
+
+  if (is.null(n) || (length(n) && n > length(stack))) {
+    stack <- c(stack, list(global_frame()))
+  }
+  if (trim > 0) {
+    stack <- stack_trim(stack, n = trim + 1)
+  }
+
+  structure(stack, class = c("ctxt_stack", "stack"))
+}
+
+ctxt_stack_trail <- function() {
+  pos <- sys.nframe() - 1
+  seq(pos, 1)
+}
+ctxt_stack_exprs <- function() {
+  exprs <- sys.calls()
+  rev(drop_last(exprs))
+}
+ctxt_stack_envs <- function(n = 1) {
+  envs <- sys.frames()
+  rev(drop_last(envs))
+}
+ctxt_stack_callers <- function() {
+  callers <- sys.parents()
+  rev(drop_last(callers))
+}
+ctxt_stack_fns <- function() {
+  pos <- sys.nframe() - 1
+  map(seq(pos, 1), sys.function)
+}
+
+stack_subset <- function(stack_data, n) {
+  if (length(n)) {
+    stopifnot(n > 0)
+    n_stack <- length(stack_data[[1]])
+    if (n == n_stack + 1) {
+      # We'll add the global frame later
+      n <- n - 1
+    } else if (n > n_stack + 1) {
+      stop("not that many frames on the stack", call. = FALSE)
+    }
+    stack_data <- map(stack_data, `[`, seq_len(n))
+  }
+  stack_data
+}
+
+#' @rdname stack
+#' @export
+call_stack <- function(n = NULL, clean = TRUE) {
+  eval_callers <- ctxt_stack_callers()
+  trail <- trail_make(eval_callers, n, clean = clean)
+
+  stack_data <- list(
+    pos = drop_last(trail),
+    caller_pos = drop_first(trail),
+    expr = map(trail, sys.call),
+    env = map(trail, sys.frame),
+    fn = map(trail, sys.function)
+  )
+  stack_data$fn_name <- map(stack_data$expr, lang_name)
+
+  stack <- transpose(stack_data)
+  stack <- map(stack, new_frame)
+  if (clean) {
+    stack <- map(stack, frame_clean_eval)
+  }
+
+  if (trail[length(trail)] == 0L) {
+    stack <- c(stack, list(global_frame()))
+  }
+
+  structure(stack, class = c("call_stack", "stack"))
+}
+
+frame_clean_eval <- function(frame) {
+  if (identical(frame$fn, base::eval)) {
+    # Use the environment from the context created in do_eval()
+    # (the context with the fake primitive call)
+    stopifnot(is_prim_eval(sys.function(frame$pos + 1)))
+    frame$env <- sys.frame(frame$pos + 1)
+  }
+
+  frame
+}
+
+#' Is object a stack?
+#' @param x An object to test
+#' @export
+is_stack <- function(x) inherits(x, "stack")
+
+#' @rdname is_stack
+#' @export
+is_eval_stack <- function(x) inherits(x, "ctxt_stack")
+
+#' @rdname is_stack
+#' @export
+is_call_stack <- function(x) inherits(x, "call_stack")
+
+#' @export
+`[.stack` <- function(x, i) {
+  structure(NextMethod(), class = class(x))
+}
+
+# Handles global_frame() whose `caller_pos` is NA
+sys_frame <- function(n) {
+  if (is.na(n)) {
+    NULL
+  } else {
+    sys.frame(n)
+  }
+}
+
+#' Find the position or distance of a frame on the evaluation stack
+#'
+#' The frame position on the stack can be computed by counting frames
+#' from the global frame (the bottom of the stack, the default) or
+#' from the current frame (the top of the stack).
+#'
+#' While this function returns the position of the frame on the
+#' evaluation stack, it can safely be called with intervening frames
+#' as those will be discarded.
+#'
+#' @param frame The environment of a frame. Can be any object with a
+#'   [get_env()] method. Note that for frame objects, the position from
+#'   the global frame is simply `frame$pos`. Alternatively, `frame`
+#'   can be an integer that represents the position on the stack (and
+#'   is thus returned as is if `from` is "global").
+#' @param from Whether to compute distance from the global frame (the
+#'   bottom of the evaluation stack), or from the current frame (the
+#'   top of the evaluation stack).
+#' @export
+#' @examples
+#' fn <- function() g(environment())
+#' g <- function(env) frame_position(env)
+#'
+#' # frame_position() returns the position of the frame on the evaluation stack:
+#' fn()
+#' identity(identity(fn()))
+#'
+#' # Note that it trims off intervening calls before counting so you
+#' # can safely nest it within other calls:
+#' g <- function(env) identity(identity(frame_position(env)))
+#' fn()
+#'
+#' # You can also ask for the position from the current frame rather
+#' # than the global frame:
+#' fn <- function() g(environment())
+#' g <- function(env) h(env)
+#' h <- function(env) frame_position(env, from = "current")
+#' fn()
+frame_position <- function(frame, from = c("global", "current")) {
+  stack <- stack_trim(ctxt_stack(), n = 2)
+
+  if (arg_match(from) == "global") {
+    frame_position_global(frame, stack)
+  } else {
+    caller_pos <- call_frame(2)$pos
+    frame_position_current(frame, stack, caller_pos)
+  }
+}
+
+frame_position_global <- function(frame, stack = NULL) {
+  if (is_frame(frame)) {
+    return(frame$pos)
+  } else if (is_integerish(frame)) {
+    return(frame)
+  }
+
+  frame <- get_env(frame)
+  stack <- stack %||% stack_trim(ctxt_stack(), n = 2)
+  envs <- pluck(stack, "env")
+
+  i <- 1
+  for (env in envs) {
+    if (identical(env, frame)) {
+      return(length(envs) - i)
+    }
+    i <- i + 1
+  }
+
+  abort("`frame` not found on evaluation stack")
+}
+
+frame_position_current <- function(frame, stack = NULL,
+                                   caller_pos = NULL) {
+  if (is_integerish(frame)) {
+    pos <- frame
+  } else {
+    stack <- stack %||% stack_trim(ctxt_stack(), n = 2)
+    pos <- frame_position_global(frame, stack)
+  }
+  caller_pos <- caller_pos %||% call_frame(2)$pos
+  caller_pos - pos + 1
+}
+
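The lookup performed by `frame_position_global()` boils down to matching a frame environment against the environments on the stack. A standalone base-R analogue of that matching loop (the `env_position()` name is illustrative, not part of rlang):

```r
# Hypothetical helper: locate a frame environment among sys.frames().
env_position <- function(env, frames = sys.frames()) {
  for (i in seq_along(frames)) {
    if (identical(frames[[i]], env)) {
      return(i)
    }
  }
  stop("`env` not found on evaluation stack")
}

f <- function() g(environment())
g <- function(env) env_position(env)
f()  # 1: f()'s frame sits at the bottom of the stack
```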
+
+#' Trim top call layers from the evaluation stack
+#'
+#' [ctxt_stack()] can be tricky to use in real code because all
+#' intervening frames are returned with the stack, including those at
+#' `ctxt_stack()`'s own call site. `stack_trim()` makes it easy to
+#' remove layers of intervening calls.
+#'
+#' @param stack An evaluation stack.
+#' @param n The number of call frames (not eval frames) to trim off
+#'   the top of the stack. In other words, the number of layers of
+#'   intervening frames to trim.
+#' @export
+#' @examples
+#' # Intervening frames appear on the evaluation stack:
+#' identity(identity(ctxt_stack()))
+#'
+#' # stack_trim() will trim the first n layers of calls:
+#' stack_trim(identity(identity(ctxt_stack())))
+#'
+#' # Note that it also takes care of calls intervening at its own call
+#' # site:
+#' identity(identity(
+#'   stack_trim(identity(identity(ctxt_stack())))
+#' ))
+#'
+#' # It is especially useful when used within a function that needs to
+#' # inspect the evaluation stack but should nonetheless be callable
+#' # within nested calls without side effects:
+#' stack_util <- function() {
+#'   # n = 2 means that two layers of intervening calls should be
+#'   # removed: The layer at ctxt_stack()'s call site (including the
+#'   # stack_trim() call), and the layer at stack_util()'s call.
+#'   stack <- stack_trim(ctxt_stack(), n = 2)
+#'   stack
+#' }
+#' user_fn <- function() {
+#'   # A user calls your stack utility with intervening frames:
+#'   identity(identity(stack_util()))
+#' }
+#' # These intervening frames won't appear in the evaluation stack
+#' identity(user_fn())
+stack_trim <- function(stack, n = 1) {
+  if (n < 1) {
+    return(stack)
+  }
+
+  # Add 1 to discard stack_trim()'s own intervening frames
+  caller_pos <- call_frame(n + 1, clean = FALSE)$pos
+
+  n_frames <- length(stack)
+  n_skip <- n_frames - caller_pos
+  stack[seq(n_skip, n_frames)]
+}
+
+is_frame_env <- function(env) {
+  for (frame in sys.frames()) {
+    if (identical(env, frame)) {
+      return(TRUE)
+    }
+  }
+  FALSE
+}
+
+
+#' Jump to or from a frame
+#'
+#' While [base::return()] can only return from the current local
+#' frame, these two functions will return from any frame on the
+#' current evaluation stack, between the global and the currently
+#' active context. They provide a way of performing arbitrary
+#' non-local jumps out of the function currently under evaluation.
+#'
+#' `return_from()` will jump out of `frame`. `return_to()` is a bit
+#' trickier. It will jump out of the frame located just before `frame`
+#' in the evaluation stack, so that control flow ends up in `frame`,
+#' at the location where the previous frame was called from.
+#'
+#' These functions should only be used rarely. This sort of non-local
+#' goto can be hard to reason about in casual code, though it can
+#' sometimes be useful. Also, consider using the condition system to
+#' perform non-local jumps.
+#'
+#' @param frame An environment, a frame object, or any object with an
+#'   [get_env()] method. The environment should be an evaluation
+#'   environment currently on the stack.
+#' @param value The return value.
+#' @export
+#' @examples
+#' # Passing fn() evaluation frame to g():
+#' fn <- function() {
+#'   val <- g(get_env())
+#'   cat("g returned:", val, "\n")
+#'   "normal return"
+#' }
+#' g <- function(env) h(env)
+#'
+#' # Here we return from fn() with a new return value:
+#' h <- function(env) return_from(env, "early return")
+#' fn()
+#'
+#' # Here we return to fn(). The call stack unwinds until the last frame
+#' # called by fn(), which is g() in that case.
+#' h <- function(env) return_to(env, "early return")
+#' fn()
+return_from <- function(frame, value = NULL) {
+  if (is_integerish(frame)) {
+    frame <- ctxt_frame(frame)
+  }
+
+  exit_env <- get_env(frame)
+  expr <- expr(return(!!value))
+  eval_bare(expr, exit_env)
+}
+
+#' @rdname return_from
+#' @export
+return_to <- function(frame, value = NULL) {
+  if (is_integerish(frame)) {
+    prev_pos <- frame - 1
+  } else {
+    env <- get_env(frame)
+    distance <- frame_position_current(env)
+    prev_pos <- distance - 1
+  }
+
+  prev_frame <- ctxt_frame(prev_pos)
+  return_from(prev_frame, value)
+}
+
+
+#' Inspect a call
+#'
+#' This function is useful for quick testing and debugging when you
+#' manipulate expressions and calls. It lets you check that a function
+#' is called with the right arguments. This can be useful in unit
+#' tests for instance. Note that this is just a simple wrapper around
+#' [base::match.call()].
+#'
+#' @param ... Arguments to display in the returned call.
+#' @export
+#' @examples
+#' call_inspect(foo(bar), "" %>% identity())
+#' invoke(call_inspect, list(a = mtcars, b = letters))
+call_inspect <- function(...) match.call()
diff --git a/R/types.R b/R/types.R
new file mode 100644
index 0000000..f01b0e7
--- /dev/null
+++ b/R/types.R
@@ -0,0 +1,685 @@
+#' Type predicates
+#'
+#' These type predicates aim to make type testing in R more
+#' consistent. They are wrappers around [base::typeof()], so they
+#' operate at a level beneath S3/S4 and other class systems.
+#'
+#' Compared to base R functions:
+#'
+#' * The predicates for vectors include the `n` argument for
+#'   pattern-matching on the vector length.
+#'
+#' * Unlike `is.atomic()`, `is_atomic()` does not return `TRUE` for
+#'   `NULL`.
+#'
+#' * Unlike `is.vector()`, `is_vector()` tests whether an object is
+#'   an atomic vector or a list. `is.vector()` also checks for the
+#'   presence of attributes (other than names).
+#'
+#' * `is_function()` returns `TRUE` only for regular functions, not
+#'   special or primitive functions.
+#'
+#' @param x Object to be tested.
+#' @param n Expected length of a vector.
+#' @param encoding Expected encoding of a string or character
+#'   vector. One of `UTF-8`, `latin1`, or `unknown`.
+#' @seealso [bare-type-predicates] [scalar-type-predicates]
+#' @name type-predicates
+NULL
+
+#' @export
+#' @rdname type-predicates
+is_list <- function(x, n = NULL) {
+  if (typeof(x) != "list") return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  TRUE
+}
+
+parsable_atomic_types <- c("logical", "integer", "double", "complex", "character")
+atomic_types <- c(parsable_atomic_types, "raw")
+#' @export
+#' @rdname type-predicates
+is_atomic <- function(x, n = NULL) {
+  if (!typeof(x) %in% atomic_types) return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  TRUE
+}
+#' @export
+#' @rdname type-predicates
+is_vector <- function(x, n = NULL) {
+  is_atomic(x, n) || is_list(x, n)
+}
+
+#' @export
+#' @rdname type-predicates
+is_integer <- function(x, n = NULL) {
+  if (typeof(x) != "integer") return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  TRUE
+}
+#' @export
+#' @rdname type-predicates
+is_double <- function(x, n = NULL) {
+  if (typeof(x) != "double") return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  TRUE
+}
+#' @export
+#' @rdname type-predicates
+is_character <- function(x, n = NULL, encoding = NULL) {
+  if (typeof(x) != "character") return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  stopifnot(typeof(encoding) %in% c("character", "NULL"))
+  if (!is_null(encoding) && !all(chr_encoding(x) %in% encoding)) return(FALSE)
+  TRUE
+}
+#' @export
+#' @rdname type-predicates
+is_logical <- function(x, n = NULL) {
+  if (typeof(x) != "logical") return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  TRUE
+}
+#' @export
+#' @rdname type-predicates
+is_raw <- function(x, n = NULL) {
+  if (typeof(x) != "raw") return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  TRUE
+}
+#' @export
+#' @rdname type-predicates
+is_bytes <- is_raw
+
+#' @export
+#' @rdname type-predicates
+is_null <- function(x) {
+  typeof(x) == "NULL"
+}
+
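All predicates above share the same two-step pattern: check the base type, then optionally pattern-match on length. A condensed standalone sketch of that pattern (the factory below is illustrative, not rlang's implementation):

```r
# Hypothetical factory reproducing the shared structure of the
# predicates above: match typeof(), then pattern-match on length.
new_type_predicate <- function(type) {
  function(x, n = NULL) {
    if (typeof(x) != type) return(FALSE)
    if (!is.null(n) && length(x) != n) return(FALSE)
    TRUE
  }
}

is_dbl <- new_type_predicate("double")
is_dbl(c(1.5, 2.5))         # TRUE
is_dbl(c(1.5, 2.5), n = 3)  # FALSE: wrong length
is_dbl(1:2)                 # FALSE: integer, not double
```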
+#' Scalar type predicates
+#'
+#' These predicates check for a given type and whether the vector is
+#' "scalar", that is, of length 1.
+#' @inheritParams type-predicates
+#' @param x Object to be tested.
+#' @seealso [type-predicates], [bare-type-predicates]
+#' @name scalar-type-predicates
+NULL
+
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_list <- function(x) {
+  is_list(x) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_atomic <- function(x) {
+  is_atomic(x) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_vector <- function(x) {
+  is_vector(x) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_integer <- function(x) {
+  is_integer(x) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_double <- function(x) {
+  is_double(x) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_character <- function(x, encoding = NULL) {
+  is_character(x, encoding = encoding) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_logical <- function(x) {
+  is_logical(x) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_raw <- function(x) {
+  is_raw(x) && length(x) == 1
+}
+#' @export
+#' @rdname scalar-type-predicates
+is_string <- is_scalar_character
+#' @export
+#' @rdname scalar-type-predicates
+is_scalar_bytes <- is_scalar_raw
+
+#' Bare type predicates
+#'
+#' These predicates check for a given type but only return `TRUE` for
+#' bare R objects. Bare objects have no class attributes. For example,
+#' a data frame is a list, but not a bare list.
+#'
+#' * The predicates for vectors include the `n` argument for
+#'   pattern-matching on the vector length.
+#'
+#' * Like [is_atomic()] and unlike base R `is.atomic()`,
+#'   `is_bare_atomic()` does not return `TRUE` for `NULL`.
+#'
+#' * Unlike base R `is.numeric()`, `is_bare_double()` only returns
+#'   `TRUE` for floating point numbers.
+#' @inheritParams type-predicates
+#' @seealso [type-predicates], [scalar-type-predicates]
+#' @name bare-type-predicates
+NULL
+
+#' @export
+#' @rdname bare-type-predicates
+is_bare_list <- function(x, n = NULL) {
+  !is.object(x) && is_list(x, n)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_atomic <- function(x, n = NULL) {
+  !is.object(x) && is_atomic(x, n)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_vector <- function(x, n = NULL) {
+  is_bare_atomic(x) || is_bare_list(x, n)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_double <- function(x, n = NULL) {
+  !is.object(x) && is_double(x, n)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_integer <- function(x, n = NULL) {
+  !is.object(x) && is_integer(x, n)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_numeric <- function(x, n = NULL) {
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  !is.object(x) && typeof(x) %in% c("double", "integer")
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_character <- function(x, n = NULL, encoding = NULL) {
+  !is.object(x) && is_character(x, n, encoding = encoding)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_logical <- function(x, n = NULL) {
+  !is.object(x) && is_logical(x, n)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_raw <- function(x, n = NULL) {
+  !is.object(x) && is_raw(x, n)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_string <- function(x, n = NULL) {
+  # Check `n` here: is_string()'s second argument is `encoding`, not `n`
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  !is.object(x) && is_string(x)
+}
+#' @export
+#' @rdname bare-type-predicates
+is_bare_bytes <- is_bare_raw
+
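The "bare" qualifier amounts to an `is.object()` check: a data frame is a list at the `typeof()` level but carries a class attribute, so it is not bare. A base-R illustration of the distinction these wrappers encode:

```r
df <- data.frame(x = 1:3)
typeof(df)     # "list": a data frame is a list underneath
is.object(df)  # TRUE: it carries a class attribute

# The pattern used by is_bare_list() and friends:
!is.object(df) && typeof(df) == "list"            # FALSE
!is.object(list(1)) && typeof(list(1)) == "list"  # TRUE
```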
+
+#' Is object an empty vector or NULL?
+#'
+#' @param x An object to test.
+#' @export
+#' @examples
+#' is_empty(NULL)
+#' is_empty(list())
+#' is_empty(list(NULL))
+is_empty <- function(x) length(x) == 0
+
+#' Is object an environment?
+#'
+#' `is_bare_env()` tests whether `x` is an environment without an S3
+#' or S4 class.
+#'
+#' @inheritParams is_empty
+#' @export
+is_env <- function(x) {
+  typeof(x) == "environment"
+}
+#' @rdname is_env
+#' @export
+is_bare_env <- function(x) {
+  !is.object(x) && typeof(x) == "environment"
+}
+
+#' Is object identical to TRUE or FALSE?
+#'
+#' These functions bypass R's automatic conversion rules and check
+#' that `x` is literally `TRUE` or `FALSE`.
+#' @inheritParams is_empty
+#' @export
+#' @examples
+#' is_true(TRUE)
+#' is_true(1)
+#'
+#' is_false(FALSE)
+#' is_false(0)
+is_true <- function(x) {
+  identical(x, TRUE)
+}
+#' @rdname is_true
+#' @export
+is_false <- function(x) {
+  identical(x, FALSE)
+}
+
+#' Is a vector integer-like?
+#'
+#' These predicates check whether R considers a numeric vector to be
+#' integer-like, according to its own tolerance check (which is in
+#' fact delegated to the C library). This function is not suited to
+#' data analysis; see the help for [base::is.integer()] for examples
+#' of how to check for whole numbers.
+#'
+#' @seealso [is_bare_numeric()] for testing whether an object is a
+#'   base numeric type (a bare double or integer vector).
+#' @inheritParams type-predicates
+#' @export
+#' @examples
+#' is_integerish(10L)
+#' is_integerish(10.0)
+#' is_integerish(10.000001)
+#' is_integerish(TRUE)
+is_integerish <- function(x, n = NULL) {
+  if (typeof(x) == "integer") return(TRUE)
+  if (typeof(x) != "double") return(FALSE)
+  if (!is_null(n) && length(x) != n) return(FALSE)
+  all(x == as.integer(x))
+}
+#' @rdname is_integerish
+#' @export
+is_bare_integerish <- function(x, n = NULL) {
+  !is.object(x) && is_integerish(x, n)
+}
+#' @rdname is_integerish
+#' @export
+is_scalar_integerish <- function(x) {
+  !is.object(x) && is_integerish(x, 1L)
+}
+
+#' Base type of an object
+#'
+#' This is equivalent to [base::typeof()] with a few differences that
+#' make dispatching easier:
+#' * The type of one-sided formulas is "formula".
+#' * The type of character vectors of length 1 is "string".
+#' * The type of special and builtin functions is "primitive".
+#'
+#' @param x An R object.
+#' @export
+#' @examples
+#' type_of(10L)
+#'
+#' # Quosures are treated as a new base type but not formulas:
+#' type_of(quo(10L))
+#' type_of(~10L)
+#'
+#' # Compare to base::typeof():
+#' typeof(quo(10L))
+#'
+#' # Strings are treated as a new base type:
+#' type_of(letters)
+#' type_of(letters[[1]])
+#'
+#' # This is a bit inconsistent with the core language tenet that data
+#' # types are vectors. However, treating strings as a different
+#' # scalar type is quite helpful for switching on function inputs
+#' # since so many arguments expect strings:
+#' switch_type("foo", character = abort("vector!"), string = "result")
+#'
+#' # Special and builtin primitives are both treated as primitives.
+#' # That's because it is often irrelevant which type of primitive an
+#' # input is:
+#' typeof(list)
+#' typeof(`$`)
+#' type_of(list)
+#' type_of(`$`)
+type_of <- function(x) {
+  type <- typeof(x)
+  if (is_formulaish(x)) {
+    if (identical(node_car(x), sym_def)) {
+      "definition"
+    } else {
+      "formula"
+    }
+  } else if (type == "character") {
+    if (length(x) == 1) "string" else "character"
+  } else if (type %in% c("builtin", "special")) {
+    "primitive"
+  } else {
+    type
+  }
+}
+
+#' Dispatch on base types
+#'
+#' `switch_type()` is equivalent to
+#' \code{\link[base]{switch}(\link{type_of}(x), ...)}, while
+#' `switch_class()` switchpatches based on `class(x)`. The `coerce_`
+#' versions are intended for type conversion and provide a standard
+#' error message when conversion fails.
+#'
+#' @param .x An object from which to dispatch.
+#' @param ... Named clauses. The names should be types as returned by
+#'   [type_of()].
+#' @param .to This is useful when you switchpatch within a coercing
+#'   function. If supplied, this should be a string indicating the
+#'   target type. A catch-all clause is then added to signal an error
+#'   stating the conversion failure. This type is prettified unless
+#'   `.to` inherits from the S3 class `"AsIs"` (see [base::I()]).
+#' @seealso [switch_lang()]
+#' @export
+#' @examples
+#' switch_type(3L,
+#'   double = "foo",
+#'   integer = "bar",
+#'   "default"
+#' )
+#'
+#' # Use the coerce_ version to get standardised error handling when no
+#' # type matches:
+#' to_chr <- function(x) {
+#'   coerce_type(x, "a chr",
+#'     integer = as.character(x),
+#'     double = as.character(x)
+#'   )
+#' }
+#' to_chr(3L)
+#'
+#' # Strings have their own type:
+#' switch_type("str",
+#'   character = "foo",
+#'   string = "bar",
+#'   "default"
+#' )
+#'
+#' # Use a fallthrough clause if you need to dispatch on all character
+#' # vectors, including strings:
+#' switch_type("str",
+#'   string = ,
+#'   character = "foo",
+#'   "default"
+#' )
+#'
+#' # special and builtin functions are treated as primitive, since
+#' # there is usually no reason to treat them differently:
+#' switch_type(base::list,
+#'   primitive = "foo",
+#'   "default"
+#' )
+#' switch_type(base::`$`,
+#'   primitive = "foo",
+#'   "default"
+#' )
+#'
+#' # closures are not primitives:
+#' switch_type(rlang::switch_type,
+#'   primitive = "foo",
+#'   "default"
+#' )
+switch_type <- function(.x, ...) {
+  switch(type_of(.x), ...)
+}
+#' @rdname switch_type
+#' @export
+coerce_type <- function(.x, .to, ...) {
+  switch(type_of(.x), ..., abort_coercion(.x, .to))
+}
+#' @rdname switch_type
+#' @export
+switch_class <- function(.x, ...) {
+  switch(class(.x), ...)
+}
+#' @rdname switch_type
+#' @export
+coerce_class <- function(.x, .to, ...) {
+  switch(class(.x), ..., abort_coercion(.x, .to))
+}
+abort_coercion <- function(x, to_type) {
+  x_type <- friendly_type(type_of(x))
+  if (!inherits(to_type, "AsIs")) {
+    to_type <- friendly_type(to_type)
+  }
+  abort(paste0("Can't convert ", x_type, " to ", to_type))
+}
+
+#' Format a type for error messages
+#'
+#' @param type A type as returned by [type_of()] or [lang_type_of()].
+#' @return A string of the prettified type, qualified with an
+#'   indefinite article.
+#' @export
+#' @examples
+#' friendly_type("logical")
+#' friendly_type("integer")
+#' friendly_type("string")
+#' @export
+friendly_type <- function(type) {
+  friendly <- friendly_type_of(type)
+  if (!is_null(friendly)) {
+    return(friendly)
+  }
+
+  friendly <- friendly_lang_type_of(type)
+  if (!is_null(friendly)) {
+    return(friendly)
+  }
+
+  friendly <- friendly_expr_type_of(type)
+  if (!is_null(friendly)) {
+    return(friendly)
+  }
+
+  type
+}
+
+friendly_type_of <- function(type) {
+  switch(type,
+    logical = "a logical vector",
+    integer = "an integer vector",
+    numeric = ,
+    double = "a double vector",
+    complex = "a complex vector",
+    character = "a character vector",
+    raw = "a raw vector",
+    string = "a string",
+    list = "a list",
+
+    NULL = "NULL",
+    environment = "an environment",
+    externalptr = "a pointer",
+    weakref = "a weak reference",
+    S4 = "an S4 object",
+
+    name = ,
+    symbol = "a symbol",
+    language = "a call",
+    pairlist = "a pairlist node",
+    expression = "an expression vector",
+    quosure = "a quosure",
+
+    char = "an internal string",
+    promise = "an internal promise",
+    ... = "an internal dots object",
+    any = "an internal `any` object",
+    bytecode = "an internal bytecode object",
+
+    primitive = ,
+    builtin = ,
+    special = "a primitive function",
+    closure = "a function"
+  )
+}
+
+friendly_lang_type_of <- function(type) {
+  switch(type,
+    named = "a named call",
+    namespaced = "a namespaced call",
+    recursive = "a recursive call",
+    inlined = "an inlined call"
+  )
+}
+
+friendly_expr_type_of <- function(type) {
+  switch(type,
+    NULL = "NULL",
+    name = ,
+    symbol = "a symbol",
+    language = "a call",
+    pairlist = "a pairlist node",
+    literal = "a syntactic literal",
+    missing = "the missing argument"
+  )
+}
+
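`friendly_type()` tries each lookup table in turn and falls back to the raw type, relying on `switch()` returning `NULL` when nothing matches. A condensed standalone sketch of that cascade (tables truncated for illustration):

```r
# Hypothetical two-table version of the cascade in friendly_type().
friendly <- function(type) {
  base_tbl <- switch(type,
    integer = "an integer vector",
    string = "a string"
  )
  if (!is.null(base_tbl)) return(base_tbl)

  lang_tbl <- switch(type, named = "a named call")
  if (!is.null(lang_tbl)) return(lang_tbl)

  type  # unknown types pass through unchanged
}

friendly("integer")  # "an integer vector"
friendly("named")    # "a named call"
friendly("weird")    # "weird"
```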
+#' Dispatch on call type
+#'
+#' `switch_lang()` dispatches clauses based on the subtype of a call,
+#' as determined by `lang_type_of()`. The subtypes are based on the
+#' type of the call head (see details).
+#'
+#' Calls (objects of type `language`) do not necessarily call a named
+#' function. They can also call an anonymous function or the result of
+#' some other expression. The language subtypes are organised around
+#' the kind of object being called:
+#'
+#' * For regular calls to a named function, `switch_lang()` returns
+#'   "named".
+#'
+#' * Sometimes the function being called is the result of another
+#'   function call, e.g. `foo()()`, or the result of another
+#'   subsetting call, e.g. `foo$bar()` or `foo@bar()`. In this case,
+#'   the call head is not a symbol, it is another call (e.g. to the
+#'   infix functions `$` or `@`). The call subtype is said to be
+#'   "recursive".
+#'
+#' * A special subset of recursive calls are namespaced calls like
+#'   `foo::bar()`. `switch_lang()` returns "namespaced" for these
+#'   calls. It is generally a good idea if your function treats
+#'   `bar()` and `foo::bar()` similarly.
+#'
+#' * Finally, it is possible to have a literal (see [is_expr()] for a
+#'   definition of literals) as call head. In most cases, this will be
+#'   a function inlined in the call (this is sometimes an expedient
+#'   way of dealing with scoping issues). For calls with a literal
+#'   node head, `switch_lang()` returns "inlined". Note that if a call
+#'   head contains a literal that is not a function, something went
+#'   wrong and using that object will probably make R crash.
+#'   `switch_lang()` issues an error in this case.
+#'
+#' The reason we use the term _node head_ is that calls are
+#' structured as tree objects. This makes sense because the best
+#' representation for language code is a parse tree, with the tree
+#' hierarchy determined by the order of operations. See [pairlist] for
+#' more on this.
+#'
+#' @inheritParams switch_type
+#' @param .x,x A language object (a call). If a formula quote, the RHS
+#'   is extracted first.
+#' @param ... Named clauses. The names should be types as returned by
+#'   `lang_type_of()`.
+#' @export
+#' @examples
+#' # Named calls:
+#' lang_type_of(~foo())
+#'
+#' # Recursive calls:
+#' lang_type_of(~foo$bar())
+#' lang_type_of(~foo()())
+#'
+#' # Namespaced calls:
+#' lang_type_of(~base::list())
+#'
+#' # For an inlined call, let's inline a function in the head node:
+#' call <- quote(foo(letters))
+#' call[[1]] <- base::toupper
+#'
+#' call
+#' lang_type_of(call)
+switch_lang <- function(.x, ...) {
+  switch(lang_type_of(.x), ...)
+}
+#' @rdname switch_lang
+#' @export
+coerce_lang <- function(.x, .to, ...) {
+  msg <- paste0("Can't convert ", type_of(.x), " to ", .to)
+  switch(lang_type_of(.x), ..., abort(msg))
+}
+#' @rdname switch_lang
+#' @export
+lang_type_of <- function(x) {
+  x <- get_expr(x)
+  stopifnot(typeof(x) == "language")
+
+  type <- typeof(node_car(x))
+  if (type == "symbol") {
+    "named"
+  } else if (is_namespaced_symbol(node_car(x))) {
+    "namespaced"
+  } else if (type == "language") {
+    "recursive"
+  } else if (type %in% c("closure", "builtin", "special")) {
+    "inlined"
+  } else {
+    abort("corrupt language object")
+  }
+}
+
+#' Is an object copyable?
+#'
+#' When an object is modified, R generally copies it (sometimes
+#' lazily) to enforce [value
+#' semantics](https://en.wikipedia.org/wiki/Value_semantics).
+#' However, some internal types are uncopyable. If you try to copy
+#' them, either with `<-` or by argument passing, you actually create
+#' references to the original object rather than actual
+#' copies. Modifying these references can thus have far-reaching side
+#' effects.
+#'
+#' @param x An object to test.
+#' @export
+#' @examples
+#' # Let's add attributes with structure() to uncopyable types. Since
+#' # they are not copied, the attributes are changed in place:
+#' env <- env()
+#' structure(env, foo = "bar")
+#' env
+#'
+#' # Objects that can only be changed through side effects are not
+#' # copyable:
+#' is_copyable(env)
+#'
+#' structure(base::list, foo = "bar")
+#' str(base::list)
+is_copyable <- function(x) {
+  switch_type(x,
+    NULL = ,
+    char = ,
+    symbol = ,
+    primitive = ,
+    environment = ,
+    pointer =
+      FALSE,
+    TRUE
+  )
+}
+
+is_equal <- function(x, y) {
+  identical(x, y)
+}
+is_reference <- function(x, y) {
+  .Call(rlang_is_reference, x, y)
+}
diff --git a/R/utils.R b/R/utils.R
new file mode 100644
index 0000000..197054f
--- /dev/null
+++ b/R/utils.R
@@ -0,0 +1,99 @@
+
+substitute_ <- function(x, env) {
+  if (identical(env, globalenv())) {
+    env <- as.list(env)
+  }
+
+  call <- substitute(substitute(x, env), list(x = x))
+  eval_bare(call)
+}
+
+drop_last <- function(x) {
+  x[-length(x)]
+}
+drop_first <- function(x) {
+  x[-1]
+}
+set_names2 <- function(x, nms = names2(x)) {
+  empty <- nms == ""
+  nms[empty] <- x[empty]
+  names(x) <- nms
+  x
+}
+
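`set_names2()` fills empty names from the values themselves, which is convenient for character vectors of argument names. A standalone version with the `names2()` default inlined (the `_standalone` name is hypothetical, for illustration):

```r
set_names2_standalone <- function(x, nms = names(x)) {
  if (is.null(nms)) nms <- rep("", length(x))
  empty <- nms == ""
  nms[empty] <- x[empty]  # unnamed elements are named after their value
  names(x) <- nms
  x
}

set_names2_standalone(c(a = "x", "y"))
# c(a = "x", y = "y")
```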
+imap <- function(.x, .f, ...) {
+  idx <- names(.x) %||% seq_along(.x)
+  out <- Map(.f, idx, .x, ...)
+  names(out) <- names(.x)
+  out
+}
+imap_chr <- function(.x, .f, ...) {
+  as.vector(imap(.x, .f, ...), "character")
+}
+
+map_around <- function(.x, .neighbour = c("right", "left"), .f, ...) {
+  where <- arg_match(.neighbour)
+  n <- length(.x)
+  out <- vector("list", n)
+
+  if (n == 0) {
+    return(.x)
+  }
+
+  if (n == 1) {
+    out[[1]] <- .f(.x[[1]], missing_arg(), ...)
+    return(out)
+  }
+
+  if (n > 1 && where == "right") {
+    neighbours <- .x[seq(2, n)]
+    idx <- seq_len(n - 1)
+    out[idx] <- Map(.f, .x[idx], neighbours, ...)
+    out[[n]] <- .f(.x[[n]], missing_arg(), ...)
+    return(out)
+  }
+
+  if (n > 1 && where == "left") {
+    neighbours <- .x[seq(1, n - 1)]
+    idx <- seq(2, n)
+    out[idx] <- Map(.f, .x[idx], neighbours, ...)
+    out[[1]] <- .f(.x[[1]], missing_arg(), ...)
+    return(out)
+  }
+
+  stop("unimplemented")
+}
+
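With `.neighbour = "right"`, `map_around()` pairs each element with the one to its right and passes a missing argument for the last element. A base-R analogue that uses `NULL` in place of rlang's missing argument (names are illustrative, not part of rlang):

```r
map_right <- function(x, f) {
  n <- length(x)
  if (n == 0) return(list())
  out <- vector("list", n)
  if (n > 1) {
    # Pair each element with its right neighbour
    out[seq_len(n - 1)] <- Map(f, x[seq_len(n - 1)], x[-1])
  }
  out[[n]] <- f(x[[n]], NULL)  # last element has no right neighbour
  out
}

map_right(as.list(1:3), function(a, b) if (is.null(b)) a else a + b)
# list(3, 5, 3)
```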
+discard_unnamed <- function(x) {
+  if (is_env(x)) {
+    x
+  } else {
+    discard(x, names2(x) == "")
+  }
+}
+
+sxp_address <- function(x) {
+  .Call(rlang_sxp_address, x)
+}
+
+captureArg <- function(x, strict = TRUE) {
+  caller_env <- parent.frame()
+
+  if (identical(caller_env, globalenv())) {
+    stop("must be called in a function")
+  }
+  if (missing(x)) {
+    stop("argument \"x\" is missing")
+  }
+
+  .Call(rlang_capturearg, NULL, NULL, pairlist(caller_env, strict), get_env())
+}
+captureDots <- function(strict = TRUE) {
+  caller_env <- parent.frame()
+
+  if (!exists("...", caller_env)) {
+    stop("must be called in a function where dots exist")
+  }
+
+  .Call(rlang_capturedots, NULL, NULL, pairlist(caller_env, strict), get_env())
+}
diff --git a/R/vector-chr.R b/R/vector-chr.R
new file mode 100644
index 0000000..2917215
--- /dev/null
+++ b/R/vector-chr.R
@@ -0,0 +1,256 @@
+#' Create a string
+#'
+#' These base-type constructors allow more control over the creation
+#' of strings in R. They take character vectors or string-like objects
+#' (integerish or raw vectors), and optionally set the encoding. The
+#' string version checks that the input contains a scalar string.
+#'
+#' @param x A character vector or a vector or list of string-like
+#'   objects.
+#' @param encoding If non-null, passed to [set_chr_encoding()] to add
+#'   an encoding mark. This is only declarative; no encoding
+#'   conversion is performed.
+#' @seealso `set_chr_encoding()` for more information
+#'   about encodings in R.
+#' @export
+#' @examples
+#' # As everywhere in R, you can specify a string with Unicode
+#' # escapes. The characters corresponding to Unicode codepoints will
+#' # be encoded in UTF-8, and the string will be marked as UTF-8
+#' # automatically:
+#' cafe <- string("caf\uE9")
+#' str_encoding(cafe)
+#' as_bytes(cafe)
+#'
+#' # In addition, string() provides useful conversions to let
+#' # programmers control how the string is represented in memory. For
+#' # encodings other than UTF-8, you'll need to supply the bytes in
+#' # hexadecimal form. If it is a latin1 encoding, you can mark the
+#' # string explicitly:
+#' cafe_latin1 <- string(c(0x63, 0x61, 0x66, 0xE9), "latin1")
+#' str_encoding(cafe_latin1)
+#' as_bytes(cafe_latin1)
+string <- function(x, encoding = NULL) {
+  if (is_integerish(x)) {
+    x <- rawToChar(as.raw(x))
+  } else if (is_raw(x)) {
+    x <- rawToChar(x)
+  } else if (!is_string(x)) {
+    abort("`x` must be a string or raw vector")
+  }
+
+  set_chr_encoding(x, encoding)
+}
+
+#' Coerce to a character vector and attempt encoding conversion
+#'
+#' @description
+#'
+#' Unlike specifying the `encoding` argument in `as_string()` and
+#' `as_character()`, which is only declarative, these functions
+#' actually attempt to convert the encoding of their input. There are
+#' two possible cases:
+#'
+#' * The string is tagged as UTF-8 or latin1, the only two encodings
+#'   for which R has specific support. In this case, converting to the
+#'   same encoding is a no-op, and converting to native always works
+#'   as expected, as long as the native encoding, the one specified
+#'   by the `LC_CTYPE` locale (see [mut_utf8_locale()]), supports all
+#'   characters occurring in the strings. Unrepresentable characters
+#'   are serialised as Unicode code points: "<U+xxxx>".
+#'
+#' * The string is not tagged. R assumes that it is encoded in the
+#'   native encoding. Conversion to native is a no-op, and conversion
+#'   to UTF-8 should work as long as the string is actually encoded in
+#'   the locale codeset.
+#'
+#' @param x An object to coerce.
+#' @export
+#' @examples
+#' # Let's create a string marked as UTF-8 (which is guaranteed by the
+#' # Unicode escaping in the string):
+#' utf8 <- "caf\uE9"
+#' str_encoding(utf8)
+#' as_bytes(utf8)
+#'
+#' # It can then be converted to a native encoding, that is, the
+#' # encoding specified in the current locale:
+#' \dontrun{
+#' mut_latin1_locale()
+#' latin1 <- as_native_string(utf8)
+#' str_encoding(latin1)
+#' as_bytes(latin1)
+#' }
+as_utf8_character <- function(x) {
+  enc2utf8(as_character(x))
+}
+#' @rdname as_utf8_character
+#' @export
+as_native_character <- function(x) {
+  enc2native(as_character(x))
+}
+#' @rdname as_utf8_character
+#' @export
+as_utf8_string <- function(x) {
+  coerce_type(x, "a UTF-8 string",
+    symbol = ,
+    string = enc2utf8(as_string(x))
+  )
+}
+#' @rdname as_utf8_character
+#' @export
+as_native_string <- function(x) {
+  coerce_type(x, "a natively encoded string",
+    symbol = ,
+    string = enc2native(as_string(x))
+  )
+}
+
+#' Set encoding of a string or character vector
+#'
+#' R has specific support for UTF-8 and latin1 encoded strings. This
+#' mostly matters for internal conversions. Thanks to this support,
+#' you can reencode strings to UTF-8 or latin1 for internal
+#' processing, and return these strings without having to convert them
+#' back to the native encoding. However, it is important to make sure
+#' the encoding mark has not been lost in the process, otherwise the
+#' output will be treated as if encoded according to the current
+#' locale (see [mut_utf8_locale()] for documentation about locale
+#' codesets), which is not appropriate if it does not coincide with
+#' the actual encoding. In those situations, you can use these
+#' functions to ensure your strings carry an encoding mark.
+#'
+#' @param x A string or character vector.
+#' @param encoding Either an encoding specially handled by R
+#'   (`"UTF-8"` or `"latin1"`), `"bytes"` to inhibit all encoding
+#'   conversions, or `"unknown"` if the string should be treated as
+#'   encoded in the current locale codeset.
+#' @seealso [mut_utf8_locale()] about the effects of the locale, and
+#'   [as_utf8_string()] about encoding conversion.
+#' @export
+#' @examples
+#' # Encoding marks are always ignored on ASCII strings:
+#' str_encoding(set_str_encoding("cafe", "UTF-8"))
+#'
+#' # You can specify the encoding of strings containing non-ASCII
+#' # characters:
+#' cafe <- string(c(0x63, 0x61, 0x66, 0xC3, 0xA9))
+#' str_encoding(cafe)
+#' str_encoding(set_str_encoding(cafe, "UTF-8"))
+#'
+#'
+#' # It is important to consistently mark the encoding of strings
+#' # because R and other packages perform internal string conversions
+#' # all the time. Here is an example with the names attribute:
+#' latin1 <- string(c(0x63, 0x61, 0x66, 0xE9), "latin1")
+#' latin1 <- set_names(latin1)
+#'
+#' # The names attribute is encoded in latin1 as we would expect:
+#' str_encoding(names(latin1))
+#'
+#' # However the names are converted to UTF-8 by the c() function:
+#' str_encoding(names(c(latin1)))
+#' as_bytes(names(c(latin1)))
+#'
+#' # Bad things happen when the encoding marker is lost and R performs
+#' # a conversion. R will assume that the string is encoded according
+#' # to the current locale:
+#' \dontrun{
+#' bad <- set_names(set_str_encoding(latin1, "unknown"))
+#' mut_utf8_locale()
+#'
+#' str_encoding(names(c(bad)))
+#' as_bytes(names(c(bad)))
+#' }
+set_chr_encoding <- function(x, encoding = c("unknown", "UTF-8", "latin1", "bytes")) {
+  if (!is_null(encoding)) {
+    Encoding(x) <- arg_match(encoding)
+  }
+  x
+}
+#' @rdname set_chr_encoding
+#' @export
+chr_encoding <- function(x) {
+  Encoding(x)
+}
+#' @rdname set_chr_encoding
+#' @export
+set_str_encoding <- function(x, encoding = c("unknown", "UTF-8", "latin1", "bytes")) {
+  stopifnot(is_string(x))
+  set_chr_encoding(x, encoding)
+}
+#' @rdname set_chr_encoding
+#' @export
+str_encoding <- function(x) {
+  stopifnot(is_string(x))
+  Encoding(x)
+}
+
+#' Set the locale's codeset for testing
+#'
+#' Setting a locale's codeset (specifically, the `LC_CTYPE` category)
+#' produces side effects in R's handling of strings. The most
+#' important of these affects how the R parser marks strings. R has
+#' specific internal support for latin1 (single-byte encoding) and
+#' UTF-8 (multi-byte, variable-width encoding) strings. If the locale
+#' codeset is latin1 or UTF-8, the parser will mark all strings with
+#' the corresponding encoding. It is important for strings to have
+#' consistent encoding markers, as they determine a number of internal
+#' encoding conversions when R or packages handle strings (see
+#' [set_str_encoding()] for some examples).
+#'
+#' If you are changing the locale encoding for testing purposes, you
+#' need to be aware that R caches strings and symbols to save
+#' memory. If you change the locale during an R session, it can lead
+#' to surprising and difficult-to-reproduce results. When in doubt,
+#' restart your R session.
+#'
+#' Note that these helpers are only provided for interactively testing
+#' the effects of changing the locale codeset. They let you quickly change
+#' the default text encoding to latin1, UTF-8, or non-UTF-8 MBCS. They
+#' are not widely tested and do not provide a way of setting the
+#' language and region of the locale. They have permanent side effects
+#' and should probably not be used in package examples, unit tests, or
+#' in the course of a data analysis. Note finally that
+#' `mut_utf8_locale()` will not work on Windows as only latin1 and
+#' MBCS locales are supported on this OS.
+#'
+#' @return The previous locale (invisibly).
+#' @export
+mut_utf8_locale <- function() {
+  if (.Platform$OS.type == "windows") {
+    warn("UTF-8 is not supported on Windows")
+  } else {
+    inform("Locale codeset is now UTF-8")
+    mut_ctype("en_US.UTF-8")
+  }
+}
+#' @rdname mut_utf8_locale
+#' @export
+mut_latin1_locale <- function() {
+  if (.Platform$OS.type == "windows") {
+    locale <- "English_United States.1252"
+  } else {
+    locale <- "en_US.ISO8859-1"
+  }
+  inform("Locale codeset is now latin1")
+  mut_ctype(locale)
+}
+#' @rdname mut_utf8_locale
+#' @export
+mut_mbcs_locale <- function() {
+  if (.Platform$OS.type == "windows") {
+    locale <- "English_United States.932"
+  } else {
+    locale <- "ja_JP.SJIS"
+  }
+  inform("Locale codeset is now of non-UTF-8 MBCS type")
+  mut_ctype(locale)
+}
+mut_ctype <- function(x) {
+  if (is_null(x)) return(x)
+  # Workaround bug in Sys.setlocale()
+  old <- Sys.getlocale("LC_CTYPE")
+  Sys.setlocale("LC_CTYPE", locale = x)
+  invisible(old)
+}
diff --git a/R/vector-coercion.R b/R/vector-coercion.R
new file mode 100644
index 0000000..1e3dacc
--- /dev/null
+++ b/R/vector-coercion.R
@@ -0,0 +1,227 @@
+#' Coerce an object to a base type
+#'
+#' These are equivalent to the base functions (e.g. [as.logical()],
+#' [as.list()], etc), but perform coercion rather than conversion.
+#' This means they are not generic and will not call S3 conversion
+#' methods. They only attempt to coerce the base type of their
+#' input. In addition, they have stricter implicit coercion rules and
+#' will never attempt any kind of parsing. E.g. they will not try to
+#' figure out if a character vector represents integers or booleans.
+#' Finally, they treat attributes consistently, unlike the base R
+#' functions: all attributes except names are removed.
+#'
+#'
+#' @section Coercion to logical and numeric atomic vectors:
+#'
+#' * To logical vectors: Integer and integerish double vectors. See
+#'   [is_integerish()].
+#' * To integer vectors: Logical and integerish double vectors.
+#' * To double vectors: Logical and integer vectors.
+#' * To complex vectors: Logical, integer and double vectors.
+#'
+#'
+#' @section Coercion to character vectors:
+#'
+#' `as_character()` and `as_string()` have an optional `encoding`
+#' argument to specify the encoding. R uses this information for
+#' internal handling of strings and character vectors. Note that this
+#' is only declarative, no encoding conversion is attempted. See
+#' [as_utf8_character()] and [as_native_character()] for coercing to a
+#' character vector while attempting encoding conversion.
+#'
+#' See also [set_chr_encoding()] and [mut_utf8_locale()] for
+#' information about encodings and locales in R, and [string()] and
+#' [chr()] for other ways of creating strings and character vectors.
+#'
+#' Note that only `as_string()` can coerce symbols to a scalar
+#' character vector. This makes the code more explicit and adds an
+#' extra type check.
+#'
+#'
+#' @section Coercion to lists:
+#'
+#' `as_list()` only coerces vector and dictionary types (environments
+#' are an example of dictionary type). Unlike [base::as.list()],
+#' `as_list()` removes all attributes except names.
+#'
+#'
+#' @section Effects of removing attributes:
+#'
+#' A technical side-effect of removing the attributes of the input is
+#' that the underlying object has to be copied. This has no
+#' performance implications in the case of lists because this is a
+#' shallow copy: only the list structure is copied, not the contents
+#' (see [duplicate()]). However, be aware that atomic vectors
+#' containing large amounts of data will have to be copied.
+#'
+#' In general, any attribute modification creates a copy, which is why
+#' it is better to avoid using attributes with heavy atomic vectors.
+#' Uncopyable objects like environments and symbols are an exception
+#' to this rule: in this case, attribute modification happens in
+#' place and has side-effects.
+#'
+#' @inheritParams string
+#' @param x An object to coerce to a base type.
+#' @examples
+#' # Coercing atomic vectors removes attributes with both base R and rlang:
+#' x <- structure(TRUE, class = "foo", bar = "baz")
+#' as.logical(x)
+#'
+#' # But coercing lists preserves attributes in base R but not rlang:
+#' l <- structure(list(TRUE), class = "foo", bar = "baz")
+#' as.list(l)
+#' as_list(l)
+#'
+#' # Implicit conversions are performed in base R but not rlang:
+#' as.logical(l)
+#' \dontrun{
+#' as_logical(l)
+#' }
+#'
+#' # Conversion methods are bypassed, making the result of the
+#' # coercion more predictable:
+#' as.list.foo <- function(x) "wrong"
+#' as.list(l)
+#' as_list(l)
+#'
+#' # The input is never parsed. E.g. character vectors of numbers are
+#' # not converted to numeric types:
+#' as.integer("33")
+#' \dontrun{
+#' as_integer("33")
+#' }
+#'
+#'
+#' # With base R tools there is no way to convert an environment to a
+#' # list without either triggering method dispatch, or changing the
+#' # original environment. as_list() makes it easy:
+#' x <- structure(as_env(mtcars[1:2]), class = "foobar")
+#' as.list.foobar <- function(x) abort("dont call me")
+#' as_list(x)
+#' @name vector-coercion
+NULL
+
+#' @rdname vector-coercion
+#' @export
+as_logical <- function(x) {
+  coerce_type_vec(x, friendly_type("logical"),
+    logical = set_attrs(x, NULL),
+    integer = as_base_type(x, as.logical),
+    double = as_integerish_type(x, as.logical, "logical")
+  )
+}
+#' @rdname vector-coercion
+#' @export
+as_integer <- function(x) {
+  coerce_type_vec(x, friendly_type("integer"),
+    logical = as_base_type(x, as.integer),
+    integer = set_attrs(x, NULL),
+    double = as_integerish_type(x, as.integer, "integer")
+  )
+}
+#' @rdname vector-coercion
+#' @export
+as_double <- function(x) {
+  coerce_type_vec(x, friendly_type("double"),
+    logical = ,
+    integer = as_base_type(x, as.double),
+    double = set_attrs(x, NULL)
+  )
+}
+#' @rdname vector-coercion
+#' @export
+as_complex <- function(x) {
+  coerce_type_vec(x, friendly_type("complex"),
+    logical = ,
+    integer = ,
+    double = as_base_type(x, as.complex),
+    complex = set_attrs(x, NULL)
+  )
+}
+#' @rdname vector-coercion
+#' @export
+as_character <- function(x, encoding = NULL) {
+  coerce_type_vec(x, friendly_type("character"),
+    string = ,
+    character = set_chr_encoding(set_attrs(x, NULL), encoding)
+  )
+}
+#' @rdname vector-coercion
+#' @export
+as_string <- function(x, encoding = NULL) {
+  x <- coerce_type(x, friendly_type("string"),
+    symbol = {
+      if (!is.null(encoding)) {
+        warn("`encoding` argument ignored for symbols")
+      }
+      .Call(rlang_symbol_to_character, x)
+    },
+    string = set_attrs(x, NULL)
+  )
+  set_chr_encoding(x, encoding)
+}
+#' @rdname vector-coercion
+#' @export
+as_list <- function(x) {
+  switch_type(x,
+    environment = env_as_list(x),
+    vec_as_list(x)
+  )
+}
+env_as_list <- function(x) {
+  names_x <- names(x)
+  x <- as_base_type(x, as.list)
+  set_names(x, .Call(rlang_unescape_character, names_x))
+}
+vec_as_list <- function(x) {
+  coerce_type_vec(x, friendly_type("list"),
+    logical = ,
+    integer = ,
+    double = ,
+    string = ,
+    character = ,
+    complex = ,
+    raw = as_base_type(x, as.list),
+    list = set_attrs(x, NULL)
+  )
+}
+
+as_base_type <- function(x, as_type) {
+  # Zap attributes temporarily instead of unclassing. We want to avoid
+  # method dispatch, but we also want to avoid an extra copy of atomic
+  # vectors: the first when unclassing, the second when coercing. This
+  # is also useful for uncopyable types like environments.
+  attrs <- .Call(rlang_get_attrs, x)
+  .Call(rlang_zap_attrs, x)
+
+  # This function assumes that the target type is different from the
+  # input type, otherwise no duplication is done and the output will
+  # be modified by side effect when we restore the input attributes.
+  on.exit(.Call(rlang_set_attrs, x, attrs))
+
+  as_type(x)
+}
+as_integerish_type <- function(x, as_type, to) {
+  if (is_integerish(x)) {
+    as_base_type(x, as_type)
+  } else {
+    abort(paste0(
+      "Can't convert a fractional double vector to ", friendly_type(to)
+    ))
+  }
+}
+
+coerce_type_vec <- function(.x, .to, ...) {
+  # Cannot reuse coerce_type() because switch() has a bug with
+  # fallthrough and multiple levels of dots forwarding.
+  out <- switch(type_of(.x), ..., abort_coercion(.x, .to))
+
+  if (!is_null(names(.x))) {
+    # Avoid a copy of `out` when we restore the names, since it could be
+    # a heavy atomic vector. We own `out`, so it is ok to change its
+    # attributes inplace.
+    .Call(rlang_set_attrs, out, pairlist(names = names(.x)))
+  }
+
+  out
+}
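A brief sketch of the stricter coercion rules implemented above, assuming the package is attached; the commented-out calls are expected to abort rather than parse or convert:

```r
library(rlang)

as_integer(c(1, 2, 3))   # integerish doubles coerce cleanly
as_integer(TRUE)         # logicals coerce to integer
## as_integer(1.5)       # aborts: fractional doubles are refused
## as_integer("33")      # aborts: strings are never parsed
```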
diff --git a/R/vector-ctor.R b/R/vector-ctor.R
new file mode 100644
index 0000000..a280d8c
--- /dev/null
+++ b/R/vector-ctor.R
@@ -0,0 +1,251 @@
+#' Create vectors
+#'
+#' The atomic vector constructors are equivalent to [c()] but allow
+#' you to be more explicit about the output type. Implicit coercions
+#' (e.g. from integer to logical) follow the rules described in
+#' [vector-coercion]. In addition, all constructors support splicing:
+#' if you supply [bare][is_bare_list] lists or [explicitly
+#' spliced][is_spliced] lists, their contents are spliced into the
+#' output vectors (see below for details). `ll()` is a list
+#' constructor similar to [base::list()] but with splicing semantics.
+#'
+#' @section Splicing:
+#'
+#' Splicing is an operation similar to flattening one level of nested
+#' lists, e.g. with \code{\link[=unlist]{base::unlist(x, recursive =
+#' FALSE)}} or `purrr::flatten()`. `ll()` returns its arguments as a
+#' list, just like `list()` would, but inner lists qualifying for
+#' splicing are flattened. That is, their contents are embedded in the
+#' surrounding list. Similarly, `chr()` concatenates its arguments and
+#' returns them as a single character vector, but inner lists are
+#' flattened before concatenation.
+#'
+#' Whether an inner list qualifies for splicing is determined by the
+#' type of splicing semantics. All the atomic constructors like
+#' `chr()` have _list splicing_ semantics: [bare][is_bare_list] lists
+#' and [explicitly spliced][is_spliced] lists are spliced.
+#'
+#' There are two list constructors with different splicing
+#' semantics. `ll()` only splices lists explicitly marked with
+#' [splice()].
+#'
+#' @param ... Components of the new vector. Bare lists and explicitly
+#'   spliced lists are spliced.
+#' @name vector-construction
+#' @seealso [ll()]
+#' @examples
+#' # These constructors are like a typed version of c():
+#' c(TRUE, FALSE)
+#' lgl(TRUE, FALSE)
+#'
+#' # They follow a restricted set of coercion rules:
+#' int(TRUE, FALSE, 20)
+#'
+#' # Lists can be spliced:
+#' dbl(10, list(1, 2L), TRUE)
+#'
+#'
+#' # They splice names a bit differently than c(). The latter
+#' # automatically composes inner and outer names:
+#' c(a = c(A = 10), b = c(B = 20, C = 30))
+#'
+#' # On the other hand, rlang's ctors use the inner names and issue a
+#' # warning to inform the user that the outer names are ignored:
+#' dbl(a = c(A = 10), b = c(B = 20, C = 30))
+#' dbl(a = c(1, 2))
+#'
+#' # As an exception, it is allowed to provide an outer name when the
+#' # inner vector is an unnamed scalar atomic:
+#' dbl(a = 1)
+#'
+#' # Spliced lists behave the same way:
+#' dbl(list(a = 1))
+#' dbl(list(a = c(A = 1)))
+NULL
+
+#' @rdname vector-construction
+#' @export
+lgl <- function(...) {
+  .Call(rlang_squash, dots_values(...), "logical", is_spliced_bare, 1L)
+}
+#' @rdname vector-construction
+#' @export
+int <- function(...) {
+  .Call(rlang_squash, dots_values(...), "integer", is_spliced_bare, 1L)
+}
+#' @rdname vector-construction
+#' @export
+dbl <- function(...) {
+  .Call(rlang_squash, dots_values(...), "double", is_spliced_bare, 1L)
+}
+#' @rdname vector-construction
+#' @export
+cpl <- function(...) {
+  .Call(rlang_squash, dots_values(...), "complex", is_spliced_bare, 1L)
+}
+#' @rdname vector-construction
+#' @export
+#' @param .encoding If non-null, passed to [set_chr_encoding()] to add
+#'   an encoding mark. This is only declarative, no encoding
+#'   conversion is performed.
+chr <- function(..., .encoding = NULL) {
+  out <- .Call(rlang_squash, dots_values(...), "character", is_spliced_bare, 1L)
+  set_chr_encoding(out, .encoding)
+}
+#' @rdname vector-construction
+#' @export
+#' @examples
+#'
+#' # bytes() accepts integerish inputs
+#' bytes(1:10)
+#' bytes(0x01, 0xff, c(0x03, 0x05), list(10, 20, 30L))
+bytes <- function(...) {
+  dots <- map(dots_values(...), function(dot) {
+    if (is_bare_list(dot) || is_spliced(dot)) {
+      map(dot, new_bytes)
+    } else {
+      new_bytes(dot)
+    }
+  })
+  .Call(rlang_squash, dots, "raw", is_spliced_bare, 1L)
+}
+
+#' @rdname vector-construction
+#' @export
+#' @examples
+#'
+#' # The list constructor has explicit splicing semantics:
+#' ll(1, list(2))
+#'
+#' # Note that explicitly spliced lists are always spliced:
+#' ll(!!! list(1, 2))
+ll <- function(...) {
+  .Call(rlang_squash, dots_values(...), "list", is_spliced, 1L)
+}
+
+
+#' Create vectors matching the length of a given vector
+#'
+#' These functions take the idea of [seq_along()] and generalise it to
+#' creating lists (`list_along`) and repeating values (`rep_along`).
+#' Except for `list_along()`, `raw_along()` and `bytes_along()`, the
+#' new vectors are filled with typed [missing] values.
+#'
+#' @inheritParams set_attrs
+#' @param .x A vector.
+#' @param .y Values to repeat.
+#' @examples
+#' x <- 0:5
+#' rep_along(x, 1:2)
+#' rep_along(x, 1)
+#' list_along(x)
+#' @name vector-along
+#' @seealso vector-len
+NULL
+
+#' @export
+#' @rdname vector-along
+lgl_along <- function(.x) {
+  rep_len(na_lgl, length(.x))
+}
+#' @export
+#' @rdname vector-along
+int_along <- function(.x) {
+  rep_len(na_int, length(.x))
+}
+#' @export
+#' @rdname vector-along
+dbl_along <- function(.x) {
+  rep_len(na_dbl, length(.x))
+}
+#' @export
+#' @rdname vector-along
+chr_along <- function(.x) {
+  rep_len(na_chr, length(.x))
+}
+#' @export
+#' @rdname vector-along
+cpl_along <- function(.x) {
+  rep_len(na_cpl, length(.x))
+}
+#' @export
+#' @rdname vector-along
+raw_along <- function(.x) {
+  vector("raw", length(.x))
+}
+#' @export
+#' @rdname vector-along
+bytes_along <- function(.x) {
+  vector("raw", length(.x))
+}
+#' @export
+#' @rdname vector-along
+list_along <- function(.x) {
+  vector("list", length(.x))
+}
+
+#' @export
+#' @rdname vector-along
+rep_along <- function(.x, .y) {
+  rep(.y, length.out = length(.x))
+}
+
+
+#' Create vectors matching a given length
+#'
+#' These functions construct vectors of given length, with attributes
+#' specified via dots. Except for `list_len()`, `raw_len()` and
+#' `bytes_len()`, the new vectors are filled with typed [missing]
+#' values. This is in
+#' contrast to the base function [base::vector()] which creates
+#' zero-filled vectors.
+#'
+#' @inheritParams set_attrs
+#' @param .n The vector length.
+#' @examples
+#' list_len(10)
+#' lgl_len(10)
+#' @name vector-len
+#' @seealso vector-along
+NULL
+
+#' @export
+#' @rdname vector-len
+lgl_len <- function(.n) {
+  rep_len(na_lgl, .n)
+}
+#' @export
+#' @rdname vector-len
+int_len <- function(.n) {
+  rep_len(na_int, .n)
+}
+#' @export
+#' @rdname vector-len
+dbl_len <- function(.n) {
+  rep_len(na_dbl, .n)
+}
+#' @export
+#' @rdname vector-len
+chr_len <- function(.n) {
+  rep_len(na_chr, .n)
+}
+#' @export
+#' @rdname vector-len
+cpl_len <- function(.n) {
+  rep_len(na_cpl, .n)
+}
+#' @export
+#' @rdname vector-len
+raw_len <- function(.n) {
+  vector("raw", .n)
+}
+#' @export
+#' @rdname vector-len
+bytes_len <- function(.n) {
+  vector("raw", .n)
+}
+#' @export
+#' @rdname vector-len
+list_len <- function(.n) {
+  vector("list", .n)
+}
diff --git a/R/vector-missing.R b/R/vector-missing.R
new file mode 100644
index 0000000..ad0555a
--- /dev/null
+++ b/R/vector-missing.R
@@ -0,0 +1,122 @@
+#' Missing values
+#'
+#' Missing values are represented in R with the general symbol
+#' `NA`. They can be inserted in almost all data containers: all
+#' atomic vectors except raw vectors can contain missing values. To
+#' achieve this, R automatically converts the general `NA` symbol to a
+#' typed missing value appropriate for the target vector. The objects
+#' provided here are aliases for those typed `NA` objects.
+#'
+#' Typed missing values are necessary because R needs sentinel values
+#' of the same type (i.e. the same machine representation of the data)
+#' as the containers into which they are inserted. The official typed
+#' missing values are `NA_integer_`, `NA_real_`, `NA_character_` and
+#' `NA_complex_`. The missing value for logical vectors is simply the
+#' default `NA`. The aliases provided in rlang are consistently named
+#' and thus simpler to remember. Also, `na_lgl` is provided as an
+#' alias to `NA` that makes intent clearer.
+#'
+#' Since `na_lgl` is the default `NA`, expressions such as `c(NA, NA)`
+#' yield logical vectors as no data is available to give a clue of the
+#' target type. In the same way, since lists and environments can
+#' contain any types, expressions like `list(NA)` store a logical
+#' `NA`.
+#'
+#' @seealso The [vector-along] family to create typed vectors filled
+#'   with missing values.
+#' @examples
+#' typeof(NA)
+#' typeof(na_lgl)
+#' typeof(na_int)
+#'
+#' # Note that while the base R missing symbols cannot be overwritten,
+#' # that's not the case for rlang's aliases:
+#' na_dbl <- NA
+#' typeof(na_dbl)
+#' @name missing
+NULL
+
+#' @rdname missing
+#' @export
+na_lgl <- NA
+#' @rdname missing
+#' @export
+na_int <- NA_integer_
+#' @rdname missing
+#' @export
+na_dbl <- NA_real_
+#' @rdname missing
+#' @export
+na_chr <- NA_character_
+#' @rdname missing
+#' @export
+na_cpl <- NA_complex_
+
+
+#' Test for missing values
+#'
+#' `are_na()` checks for missing values in a vector and is equivalent
+#' to [base::is.na()]. It is a vectorised predicate, meaning that its
+#' output is always the same length as its input. On the other hand,
+#' `is_na()` is a scalar predicate and always returns a scalar
+#' boolean, `TRUE` or `FALSE`. If its input is not scalar, it returns
+#' `FALSE`. Finally, there are typed versions that check for
+#' particular [missing types][missing].
+#'
+#' The scalar predicates accept non-vector inputs. They are equivalent
+#' to [is_null()] in that respect. In contrast the vectorised
+#' predicate `are_na()` requires a vector input since it is defined
+#' over vector values.
+#'
+#' @param x An object to test
+#' @export
+#' @examples
+#' # are_na() is vectorised and works regardless of the type
+#' are_na(c(1, 2, NA))
+#' are_na(c(1L, NA, 3L))
+#'
+#' # is_na() checks for scalar input and works for all types
+#' is_na(NA)
+#' is_na(na_dbl)
+#' is_na(character(0))
+#'
+#' # There are typed versions as well:
+#' is_lgl_na(NA)
+#' is_lgl_na(na_dbl)
+are_na <- function(x) {
+  if (!is_vector(x)) {
+    abort("`x` must be a vector")
+  }
+  is.na(x)
+}
+#' @rdname are_na
+#' @export
+is_na <- function(x) {
+  is_scalar_vector(x) && is.na(x)
+}
+
+#' @rdname are_na
+#' @export
+is_lgl_na <- function(x) {
+  identical(x, na_lgl)
+}
+#' @rdname are_na
+#' @export
+is_int_na <- function(x) {
+  identical(x, na_int)
+}
+#' @rdname are_na
+#' @export
+is_dbl_na <- function(x) {
+  identical(x, na_dbl)
+}
+#' @rdname are_na
+#' @export
+is_chr_na <- function(x) {
+  identical(x, na_chr)
+}
+#' @rdname are_na
+#' @export
+is_cpl_na <- function(x) {
+  identical(x, na_cpl)
+}
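The difference between the vectorised and scalar missing-value predicates above can be sketched as follows (illustrative only, assuming rlang is attached):

```r
library(rlang)

typeof(na_int)          # "integer"
are_na(c(1, NA, 3))     # FALSE TRUE FALSE
is_na(c(NA, NA))        # FALSE: input is not scalar
is_lgl_na(NA)           # TRUE
is_lgl_na(na_int)       # FALSE: the type must match too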
diff --git a/R/vector-raw.R b/R/vector-raw.R
new file mode 100644
index 0000000..55c1e98
--- /dev/null
+++ b/R/vector-raw.R
@@ -0,0 +1,26 @@
+
+new_bytes <- function(x) {
+  if (is_integerish(x)) {
+    as.raw(x)
+  } else if (is_raw(x)) {
+    x
+  } else {
+    abort("`x` must be an integerish or raw vector")
+  }
+}
+
+#' Coerce to a raw vector
+#'
+#' This currently only works with strings and raw vectors. Strings
+#' are converted to the raw bytes of their representation.
+#'
+#' @param x A string.
+#' @return A raw vector of bytes.
+#' @export
+as_bytes <- function(x) {
+  switch(typeof(x),
+    raw = return(x),
+    character = if (is_string(x)) return(charToRaw(x))
+  )
+  abort("`x` must be a string or raw vector")
+}
diff --git a/R/vector-squash.R b/R/vector-squash.R
new file mode 100644
index 0000000..f7de4d0
--- /dev/null
+++ b/R/vector-squash.R
@@ -0,0 +1,186 @@
+#' Splice a list within a vector
+#'
+#' This adjective signals to functions taking dots that `x` should be
+#' spliced in a surrounding vector. Examples of functions that support
+#' such explicit splicing are [ll()], [chr()], etc. Generally, any
+#' function taking dots with [dots_list()] or [dots_splice()]
+#' supports splicing.
+#'
+#' Note that all functions supporting dots splicing also support the
+#' syntactic operator `!!!`. For tidy capture and tidy evaluation,
+#' this operator directly manipulates the calls (see [quo()] and
+#' [quasiquotation]). However, manipulating the call is not appropriate
+#' when taking dots by value rather than by expression, because it is
+#' slow and the dots might contain large lists of data. For this
+#' reason we splice values rather than expressions when dots are not
+#' captured by expression. We do it in two steps: first mark the
+#' objects to be spliced, then splice the objects with [flatten()].
+#'
+#' @param x A list to splice.
+#' @seealso [vector-construction]
+#' @export
+#' @examples
+#' x <- list("a")
+#'
+#' # It makes sense for ll() to accept lists literally, so it doesn't
+#' # automatically splice them:
+#' ll(x)
+#'
+#' # But you can splice lists explicitly:
+#' y <- splice(x)
+#' ll(y)
+#'
+#' # Or with the syntactic shortcut:
+#' ll(!!! x)
+splice <- function(x) {
+  if (!is_list(x)) {
+    abort("Only lists can be spliced")
+  }
+  structure(x, class = "spliced")
+}
+#' @rdname splice
+#' @export
+is_spliced <- function(x) {
+  inherits(x, "spliced")
+}
+#' @rdname splice
+#' @export
+is_spliced_bare <- function(x) {
+  is_bare_list(x) || is_spliced(x)
+}
+
+#' Flatten or squash a list of lists into a simpler vector
+#'
+#' `flatten()` removes one level hierarchy from a list, while
+#' `squash()` removes all levels. These functions are similar to
+#' [unlist()] but they are type-stable so you always know what the
+#' type of the output is.
+#'
+#' @param x A list to flatten or squash. The contents of the list can
+#'   be anything for unsuffixed functions `flatten()` and `squash()`
+#'   (as a list is returned), but the contents must match the type for
+#'   the other functions.
+#' @return `flatten()` returns a list, `flatten_lgl()` a logical
+#'   vector, `flatten_int()` an integer vector, `flatten_dbl()` a
+#'   double vector, and `flatten_chr()` a character vector. Similarly
+#'   for `squash()` and the typed variants (`squash_lgl()` etc).
+#' @export
+#' @examples
+#' x <- replicate(2, sample(4), simplify = FALSE)
+#' x
+#'
+#' flatten(x)
+#' flatten_int(x)
+#'
+#' # With flatten(), only one level gets removed at a time:
+#' deep <- list(1, list(2, list(3)))
+#' flatten(deep)
+#' flatten(flatten(deep))
+#'
+#' # But squash() removes all levels:
+#' squash(deep)
+#' squash_dbl(deep)
+#'
+#' # The typed flattens remove one level and coerce to an atomic
+#' # vector at the same time:
+#' flatten_dbl(list(1, list(2)))
+#'
+#' # Only bare lists are flattened, but you can splice S3 lists
+#' # explicitly:
+#' foo <- set_attrs(list("bar"), class = "foo")
+#' str(flatten(list(1, foo, list(100))))
+#' str(flatten(list(1, splice(foo), list(100))))
+#'
+#' # Instead of splicing manually, flatten_if() and squash_if() let
+#' # you specify a predicate function:
+#' is_foo <- function(x) inherits(x, "foo") || is_bare_list(x)
+#' str(flatten_if(list(1, foo, list(100)), is_foo))
+#'
+#' # squash_if() does the same with deep lists:
+#' deep_foo <- list(1, list(foo, list(foo, 100)))
+#' str(deep_foo)
+#'
+#' str(squash(deep_foo))
+#' str(squash_if(deep_foo, is_foo))
+flatten <- function(x) {
+  .Call(rlang_squash, x, "list", is_spliced_bare, 1L)
+}
+#' @rdname flatten
+#' @export
+flatten_lgl <- function(x) {
+  .Call(rlang_squash, x, "logical", is_spliced_bare, 1L)
+}
+#' @rdname flatten
+#' @export
+flatten_int <- function(x) {
+  .Call(rlang_squash, x, "integer", is_spliced_bare, 1L)
+}
+#' @rdname flatten
+#' @export
+flatten_dbl <- function(x) {
+  .Call(rlang_squash, x, "double", is_spliced_bare, 1L)
+}
+#' @rdname flatten
+#' @export
+flatten_cpl <- function(x) {
+  .Call(rlang_squash, x, "complex", is_spliced_bare, 1L)
+}
+#' @rdname flatten
+#' @export
+flatten_chr <- function(x) {
+  .Call(rlang_squash, x, "character", is_spliced_bare, 1L)
+}
+#' @rdname flatten
+#' @export
+flatten_raw <- function(x) {
+  .Call(rlang_squash, x, "raw", is_spliced_bare, 1L)
+}
+
+#' @rdname flatten
+#' @export
+squash <- function(x) {
+  .Call(rlang_squash, x, "list", is_spliced_bare, -1L)
+}
+#' @rdname flatten
+#' @export
+squash_lgl <- function(x) {
+  .Call(rlang_squash, x, "logical", is_spliced_bare, -1L)
+}
+#' @rdname flatten
+#' @export
+squash_int <- function(x) {
+  .Call(rlang_squash, x, "integer", is_spliced_bare, -1L)
+}
+#' @rdname flatten
+#' @export
+squash_dbl <- function(x) {
+  .Call(rlang_squash, x, "double", is_spliced_bare, -1L)
+}
+#' @rdname flatten
+#' @export
+squash_cpl <- function(x) {
+  .Call(rlang_squash, x, "complex", is_spliced_bare, -1L)
+}
+#' @rdname flatten
+#' @export
+squash_chr <- function(x) {
+  .Call(rlang_squash, x, "character", is_spliced_bare, -1L)
+}
+#' @rdname flatten
+#' @export
+squash_raw <- function(x) {
+  .Call(rlang_squash, x, "raw", is_spliced_bare, -1L)
+}
+
+#' @rdname flatten
+#' @param predicate A function of one argument returning whether it
+#'   should be spliced.
+#' @export
+flatten_if <- function(x, predicate = is_spliced) {
+  .Call(rlang_squash, x, "list", predicate, 1L)
+}
+#' @rdname flatten
+#' @export
+squash_if <- function(x, predicate = is_spliced) {
+  .Call(rlang_squash, x, "list", predicate, -1L)
+}
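The one-level versus recursive behaviour of the functions above can be sketched like so (assuming the package is attached):

```r
library(rlang)

deep <- list(1, list(2, list(3)))
flatten(deep)                    # list(1, 2, list(3)): one level removed
squash(deep)                     # list(1, 2, 3): all levels removed
flatten_int(list(1L, list(2L)))  # 1L 2L: flatten and coerce in one step
```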
diff --git a/R/vector-utils.R b/R/vector-utils.R
new file mode 100644
index 0000000..39dd5d5
--- /dev/null
+++ b/R/vector-utils.R
@@ -0,0 +1,93 @@
+#' Prepend a vector
+#'
+#' This is a companion to [base::append()] to help merge two lists
+#' or atomic vectors. `prepend()` is a clearer semantic signal than
+#' `c()` that a vector is to be merged at the beginning of another,
+#' especially in a pipe chain.
+#'
+#' @param x the vector to be modified.
+#' @param values to be included in the modified vector.
+#' @param before a subscript, before which the values are to be appended.
+#' @return A merged vector.
+#' @export
+#' @examples
+#' x <- as.list(1:3)
+#'
+#' append(x, "a")
+#' prepend(x, "a")
+#' prepend(x, list("a", "b"), before = 3)
+prepend <- function(x, values, before = 1) {
+  n <- length(x)
+  stopifnot(before > 0 && before <= n)
+
+  if (before == 1) {
+    c(values, x)
+  } else {
+    c(x[1:(before - 1)], values, x[before:n])
+  }
+}
+
+#' Modify a vector
+#'
+#' This function merges a list of arguments into a vector. It always
+#' returns a list.
+#'
+#' @param .x A vector to modify.
+#' @param ... List of elements to merge into `.x`. Named elements
+#'   already existing in `.x` are used as replacements. Elements that
+#'   have new or no names are inserted at the end. These dots are
+#'   evaluated with [explicit splicing][dots_list].
+#' @return A modified vector upcast to a list.
+#' @export
+#' @examples
+#' modify(c(1, b = 2, 3), 4, b = "foo")
+#'
+#' x <- list(a = 1, b = 2)
+#' y <- list(b = 3, c = 4)
+#' modify(x, splice(y))
+modify <- function(.x, ...) {
+  out <- as.list(.x)
+  args <- dots_list(...)
+
+  args_nms <- names(args)
+  exists <- have_name(args) & args_nms %in% names(out)
+
+  for (nm in args_nms[exists]) {
+    out[[nm]] <- args[[nm]]
+  }
+
+  c(out, args[!exists])
+}
+
+#' Increasing sequence of integers in an interval
+#'
+#' These helpers take two endpoints and return the sequence of all
+#' integers within that interval. For `seq2_along()`, the upper
+#' endpoint is taken from the length of a vector. Unlike
+#' `base::seq()`, they return an empty vector if the starting point is
+#'   larger than the end point.
+#'
+#' @param from The starting point of the sequence.
+#' @param to The end point.
+#' @param x A vector whose length is the end point.
+#' @return An integer vector containing a strictly increasing
+#'   sequence.
+#' @export
+#' @examples
+#' seq2(2, 10)
+#' seq2(10, 2)
+#' seq(10, 2)
+#'
+#' seq2_along(10, letters)
+seq2 <- function(from, to) {
+  if (from > to) {
+    int()
+  } else {
+    seq.int(from, to)
+  }
+}
+#' @rdname seq2
+#' @export
+seq2_along <- function(from, x) {
+  seq2(from, length(x))
+}
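+
+# Illustrative note (comments only, not part of the upstream file):
+# `seq2()` avoids the classic `1:n` hazard when `n` can be zero:
+#
+#   n <- 0
+#   1:n         # c(1L, 0L): a loop over this runs twice
+#   seq2(1, n)  # integer(0): the loop body is skipped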
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..d43eccf
--- /dev/null
+++ b/README.md
@@ -0,0 +1,49 @@
+# rlang
+
+[![Build Status](https://travis-ci.org/tidyverse/rlang.svg?branch=master)](https://travis-ci.org/tidyverse/rlang)
+
+## Overview
+
+The rlang package provides tools to work with core language features
+of R and the tidyverse:
+
+*   The __tidyeval__ framework, which is a well-founded system for non-standard
+    evaluation built on quasiquotation (`UQ()`) and quosures (`quo()`). 
+    Read more in `vignette("tidy-evaluation")`.
+
+*   Consistent tools for working with base types:
+    
+    * Vectors, including construction (`lgl()`, `int()`, ...),
+      coercion (`as_logical()`, `as_character()`, ...), and
+      predicates (`is_logical()`, `is_character()`).
+      
+    * Language objects, such as calls (`lang()`) and symbols (`sym()`).
+    
+    * Attributes, e.g. `set_attrs()`, `set_names()`.
+    
+    * Functions, e.g. `new_function()`, `as_function()`, `is_function()`.
+    
+    * Environments, e.g. `env()`, `env_has()`, `env_get()`, `env_bind()`,
+      `env_unbind()`.
+
+*   A comprehensive set of predicates to determine if an object satisfies 
+    various conditions, e.g. `has_length()`, `is_list()`, `is_empty()`.
+    
+*   The condition (message, warning, error) and restart system.
+
+*   Call and context stacks.
+
+## Installation
+
+You can install the released version of rlang from CRAN with:
+
+```r
+install.packages("rlang")
+```
+
+Or install the development version from github with:
+
+```r
+# install.packages("devtools")
+devtools::install_github("tidyverse/rlang", build_vignettes = TRUE)
+```
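+
+As a quick, illustrative taste of the tidyeval framework (the
+`summarise_mean()` helper below is a hypothetical example, not part of
+rlang's API):
+
+```r
+library(rlang)
+
+# Capture an argument as a quosure, then evaluate it with the
+# data frame columns in scope:
+summarise_mean <- function(data, var) {
+  var <- enquo(var)
+  eval_tidy(quo(mean(!! var)), data = data)
+}
+summarise_mean(mtcars, cyl)
+```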
diff --git a/build/vignette.rds b/build/vignette.rds
new file mode 100644
index 0000000..1df215f
Binary files /dev/null and b/build/vignette.rds differ
diff --git a/inst/doc/tidy-evaluation.R b/inst/doc/tidy-evaluation.R
new file mode 100644
index 0000000..06e73d9
--- /dev/null
+++ b/inst/doc/tidy-evaluation.R
@@ -0,0 +1,28 @@
+## ---- include = FALSE----------------------------------------------------
+knitr::opts_chunk$set(collapse = T, comment = "#>")
+library("rlang")
+
+## ---- eval = FALSE-------------------------------------------------------
+#  # Taking an expression:
+#  dplyr::mutate(mtcars, cyl2 = cyl * 2)
+#  
+#  # Taking a value:
+#  var <- mtcars$cyl * 2
+#  dplyr::mutate(mtcars, cyl2 = !! var)
+
+## ---- eval = FALSE-------------------------------------------------------
+#  # Taking a symbol:
+#  dplyr::select(mtcars, cyl)
+#  
+#  # Taking an unquoted symbol:
+#  var <- quote(sym)
+#  dplyr::select(mtcars, !! var)
+
+## ---- eval = FALSE-------------------------------------------------------
+#  # Taking a column position:
+#  dplyr::select(mtcars, 2)
+#  
+#  # Taking an unquoted column position:
+#  var <- 2
+#  dplyr::select(mtcars, !! var)
+
diff --git a/inst/doc/tidy-evaluation.Rmd b/inst/doc/tidy-evaluation.Rmd
new file mode 100644
index 0000000..0bb86fc
--- /dev/null
+++ b/inst/doc/tidy-evaluation.Rmd
@@ -0,0 +1,263 @@
+---
+title: "Tidy evaluation"
+output: rmarkdown::html_vignette
+vignette: >
+  %\VignetteIndexEntry{Tidy evaluation}
+  %\VignetteEngine{knitr::rmarkdown}
+  \usepackage[utf8]{inputenc}
+---
+
+```{r, include = FALSE}
+knitr::opts_chunk$set(collapse = T, comment = "#>")
+library("rlang")
+```
+
+Tidy evaluation is a general toolkit for non-standard evaluation, principally used to create domain-specific languages or grammars. The most prominent examples of such sublanguages in R are modelling specifications with formulas (`lm()`,
+`lme4::lmer()`, etc) and data manipulation grammars (dplyr,
+tidyr). Most of these DSLs put dataframe columns in scope so that
+users can refer to them directly, saving keystrokes during interactive
+analysis and creating easily readable code.
+
+R makes it easy to create DSLs thanks to three features of the
+language:
+
+- R code is first-class. That is, R code can be manipulated like
+  any other object (see `sym()`, `lang()`, and `node()`). We use
+  the term __expression__ (see `is_expr()`) to refer to objects that
+  are created by parsing R code.
+
+- Scope is first-class. Scope is the lexical environment that
+  associates values to symbols in expressions. Unlike most
+  languages, environments can be created (see `env()`) and manipulated 
+  as regular objects.
+
+- Finally, functions can capture the expressions that were supplied
+  as arguments instead of being passed the value of these
+  expressions (see `enquo()` and `enexpr()`).
+
+R functions can capture expressions, manipulate them like
+regular objects, and alter the meaning of symbols referenced in these
+expressions by changing the scope (the environment) in which they
+are evaluated. This combination of features allows R packages to change
+the meaning of R code and create domain-specific sublanguages.
+
+Tidy evaluation is an opinionated way to use these
+features to create consistent DSLs. The main principle is that
+sublanguages should feel and behave like R code. They change the
+meaning of R code, but only in a precise and circumscribed way,
+behaving otherwise predictably and in accordance with R semantics. As
+a result, users are able to leverage their existing knowledge of R
+programming to solve problems involving the sublanguage in ways that
+were not necessarily envisioned or planned by their designers.
+
+## Parsing versus evaluation
+
+There are two ways of dealing with unevaluated expressions to create a
+sublanguage. The first is to parse the expression and modify it, and 
+the other is to leave the expression as is and evaluate it in a
+modified environment.
+
+Let's take the example of designing a modelling DSL to illustrate
+parsing. You would need to traverse the call and analyse all functions
+encountered in the expression (in particular, operators like `+` or
+`:`), building a data structure describing a model as you go. This
+method of dealing with expressions is complex, rigid, and error prone
+because you're basically writing an interpreter for R code. It is
+extremely difficult to emulate R semantics when parsing an expression:
+does a function take arguments by value or by expression? Can I parse
+these arguments? Do these symbols mean the same thing in this context?
+Will this argument be evaluated immediately or later on lazily? Given
+the difficulty of getting it right, parsing should be a last resort.
+
+The second way is to rely on evaluation in a specific environment.
+The expression is evaluated in an environment where certain objects
+and functions are given special definitions. For instance `+` might be
+defined as accumulating vectors in a data structure to build a design
+matrix later on, or we might put helper functions in scope (an example
+is `dplyr::select()`). Because this method relies on the R interpreter,
+the grammar is much more likely to behave like real R code.
+
+R DSLs are traditionally implemented with a mix of both principles.
+Expressions are parsed in ad hoc ways, but are eventually evaluated in
+an environment containing dataframe columns. While it is difficult to
+completely avoid ad hoc parsing, tidyeval DSLs strive to rely on
+evaluation as much as possible.
+
+## Values versus expressions
+
+A corollary of emphasising evaluation is that your DSL functions
+should understand _values_ in addition to expressions. This is
+especially important with
+quasiquotation:
+users can bypass symbolic evaluation completely by unquoting values. For
+instance, the following expressions are completely equivalent:
+
+```{r, eval = FALSE}
+# Taking an expression:
+dplyr::mutate(mtcars, cyl2 = cyl * 2)
+
+# Taking a value:
+var <- mtcars$cyl * 2
+dplyr::mutate(mtcars, cyl2 = !! var)
+```
+
+`dplyr::mutate()` evaluates expressions in a context where dataframe
+columns are in scope, but it accepts any value that can be treated as
+a column (a recycled scalar or a vector as long as there are rows).
+
+A more complex example is `dplyr::select()`. This function evaluates
+dataframe columns in a context where they represent column
+positions. Therefore, `select()` understands column symbols like
+`cyl`:
+
+```{r, eval = FALSE}
+# Taking a symbol:
+dplyr::select(mtcars, cyl)
+
+# Taking an unquoted symbol:
+var <- quote(sym)
+dplyr::select(mtcars, !! var)
+```
+
+But it also understands column positions:
+
+```{r, eval = FALSE}
+# Taking a column position:
+dplyr::select(mtcars, 2)
+
+# Taking an unquoted column position:
+var <- 2
+dplyr::select(mtcars, !! var)
+```
+
+Understanding values in addition to expressions makes your grammar
+more consistent, predictable, and programmable.
+
+## Tidy scoping
+
+The special type of scoping found in R grammars implemented with
+evaluation poses some challenges. Both objects from a dataset and
+objects from the current environment should be in scope, with the
+former having precedence over the latter. In other words, the dataset
+should __overscope__ the dynamic context. The traditional solution to
+this issue in R is to transform a dataframe to an environment and set
+the calling frame as the parent environment.  This way, the symbols
+appearing in the expression can refer to their surrounding context in
+addition to dataframe columns. In other words, the grammar correctly
+implements an important aspect of R:
+[lexical scoping](http://adv-r.had.co.nz/Functions.html#lexical-scoping).
+
+Creating this scope hierarchy (data first, context next) is possible
+because R makes it easy to capture the calling environment (see
+`caller_env()`). However, this assumes that captured expressions were
+actually typed in the most immediate caller frame. This assumption
+easily breaks in R. First, because quasiquotation allows a user to
+combine expressions that do not necessarily come from the same lexical
+context. Second, because arguments can be forwarded through the
+special `...` argument. While base R does not provide any way of
+capturing a forwarded argument along with its original environment,
+rlang features `quos()` for this purpose. This function looks up each
+forwarded argument and returns a list of quosures that bundle the
+expressions with their own dynamic environments.
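+
+For instance, a function forwarding `...` can capture each argument
+along with its environment (an illustrative sketch; `eval_dots()` is a
+hypothetical helper, not an rlang function):
+
+```{r, eval = FALSE}
+eval_dots <- function(data, ...) {
+  dots <- quos(...)  # each quosure keeps its own environment
+  lapply(dots, eval_tidy, data = data)
+}
+```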
+
+In that context, maintaining scoping consistency is a challenge
+because we're dealing with multiple environments, one for each
+argument plus one containing the overscoped data. This creates
+difficulties regarding tidyeval's overarching principle that we should
+change R semantics through evaluation. It is possible to evaluate each
+expression in turn, but how can we combine all expressions into one
+and evaluate it tidily at once? An expression can only be evaluated in
+a single environment. This is where quosures come into play.
+
+
+## Quosures and overscoping
+
+Unlike formulas, quosures aren't simple containers of an expression
+and an environment. In the tidyeval framework, they have the property
+of self-evaluating in their own environment. Hence they can appear
+anywhere in an expression (e.g. by being
+[unquoted](http://rlang.tidyverse.org/reference/quasiquotation.html)),
+carrying their own environment and behaving otherwise exactly like
+surrounding R code. Quosures behave like
+reified
+[promises](http://adv-r.had.co.nz/Computing-on-the-language.html#capturing-expressions) that
+are unreified during tidy evaluation.
+
+However, the dynamic environments of quosures do not contain
+overscoped data. It's not of much use for sublanguages to get the
+contextual environment right if they can't also change the meaning
+of code quoted in quosures. To solve this issue, tidyeval rechains
+the overscope to a quosure just before it self-evaluates. This way,
+both the lexical environment and the overscoped data are in scope
+when the quosure is evaluated. It is evaluated tidily.
+
+In practical terms, `eval_tidy()` takes a `data` argument and
+creates an overscope suitable for tidy evaluation. In particular,
+these overscopes contain definitions for self-evaluation of
+quosures. See `eval_tidy_()` and `as_overscope()` for more flexible
+ways of creating overscopes.
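+
+A minimal sketch of this in action: below, the quosure's environment
+supplies `multiplier` while the `data` argument supplies `cyl`, and
+`eval_tidy()` scopes both at once.
+
+```{r, eval = FALSE}
+multiplier <- 2
+q <- quo(cyl * multiplier)
+eval_tidy(q, data = mtcars)
+```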
+
+## Theory
+
+The most important concept of the tidy evaluation framework is that
+expressions should be scoped in their dynamic context. This issue
+is linked to the computer science concept of _hygiene_, which
+roughly means that symbols should be scoped in their local context,
+the context where they are typed by the user. In a way, hygiene is
+what "tidy" refers to in "tidy evaluation".
+
+In languages with macros, hygiene comes up for [macro
+expansion](https://en.wikipedia.org/wiki/Hygienic_macro). While
+macros look like R's non-standard evaluation functions, and share
+certain concepts with them (in particular, they get their arguments
+as unevaluated code), they are actually quite different. Macros are
+compile-time and therefore can only operate on code and constants,
+never on user data. They also don't return a value but are expanded
+in place by the compiler. In comparison, R does not have macros but
+it has [fexprs](https://en.wikipedia.org/wiki/Fexpr), i.e. regular
+functions that get arguments as unevaluated expressions rather than
+by their value (fexprs are what we call NSE functions in the R
+community). Unlike macros, these functions execute at run-time and
+return a value.
+
+Symbolic hygiene is a problem for macros during expansion because
+expanded code might invisibly redefine surrounding symbols.
+Correspondingly, hygiene is an issue for NSE functions if the code
+they captured gets evaluated in the wrong
+environment. Historically, fexprs did not have this problem because
+they existed in languages with dynamic scoping. However in modern
+languages with lexical scoping, it is imperative to bundle quoted
+expressions with their dynamic environment. The most natural way
+to do this in R is to use formulas and quosures.
+
+While formulas were introduced in the S language, the quosure was
+invented much later for R [by Luke Tierney in
+2000](https://github.com/wch/r-source/commit/a945ac8e6a82617205442d44a2be3a497d2ac896).
+From that point on formulas recorded their environment along with
+the model terms. In the Lisp world, the Kernel Lisp language also
+recognised that arguments should be captured together with their
+dynamic environment in order to solve hygienic evaluation in the
+context of lexically scoped languages (see chapter 5 of [John
+Schutt's thesis](https://web.wpi.edu/Pubs/ETD/Available/etd-090110-124904/)).
+However, Kernel Lisp did not have quosures and avoided quotation or
+quasiquotation operators altogether to avoid scoping issues.
+
+Tidyeval contributes to the problem of hygienic evaluation in four ways:
+
+- Promoting the quosure as the proper quotation data structure, in
+  order to keep track of the dynamic environment of quoted
+  expressions.
+
+- Introducing systematic quasiquotation in all capturing functions
+  in order to make it straightforward to program with these
+  functions.
+
+- Treating quosures as reified promises that self-evaluate within
+  their own environments. This allows unquoting quosures within
+  other quosures, which is the key for programming hygienically
+  with capturing functions.
+
+- Building a moving overscope that rechains to quosures as they get
+  evaluated. This makes it possible to change the evaluation
+  context and at the same time take the lexical context of each
+  quosure into account.
diff --git a/inst/doc/tidy-evaluation.html b/inst/doc/tidy-evaluation.html
new file mode 100644
index 0000000..292bef1
--- /dev/null
+++ b/inst/doc/tidy-evaluation.html
@@ -0,0 +1,137 @@
+<!DOCTYPE html>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+
+<head>
+
+<meta charset="utf-8" />
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<meta name="generator" content="pandoc" />
+
+<meta name="viewport" content="width=device-width, initial-scale=1">
+
+
+
+<title>Tidy evaluation</title>
+
+
+
+<style type="text/css">code{white-space: pre;}</style>
+<style type="text/css">
+table.sourceCode, tr.sourceCode, td.lineNumbers, td.sourceCode {
+  margin: 0; padding: 0; vertical-align: baseline; border: none; }
+table.sourceCode { width: 100%; line-height: 100%; }
+td.lineNumbers { text-align: right; padding-right: 4px; padding-left: 4px; color: #aaaaaa; border-right: 1px solid #aaaaaa; }
+td.sourceCode { padding-left: 5px; }
+code > span.kw { color: #007020; font-weight: bold; }
+code > span.dt { color: #902000; }
+code > span.dv { color: #40a070; }
+code > span.bn { color: #40a070; }
+code > span.fl { color: #40a070; }
+code > span.ch { color: #4070a0; }
+code > span.st { color: #4070a0; }
+code > span.co { color: #60a0b0; font-style: italic; }
+code > span.ot { color: #007020; }
+code > span.al { color: #ff0000; font-weight: bold; }
+code > span.fu { color: #06287e; }
+code > span.er { color: #ff0000; font-weight: bold; }
+</style>
+
+
+
+<link href="data:text/css,body%20%7B%0A%20%20background%2Dcolor%3A%20%23fff%3B%0A%20%20margin%3A%201em%20auto%3B%0A%20%20max%2Dwidth%3A%20700px%3B%0A%20%20overflow%3A%20visible%3B%0A%20%20padding%2Dleft%3A%202em%3B%0A%20%20padding%2Dright%3A%202em%3B%0A%20%20font%2Dfamily%3A%20%22Open%20Sans%22%2C%20%22Helvetica%20Neue%22%2C%20Helvetica%2C%20Arial%2C%20sans%2Dserif%3B%0A%20%20font%2Dsize%3A%2014px%3B%0A%20%20line%2Dheight%3A%201%2E35%3B%0A%7D%0A%0A%23header%20%7B%0A%20%20text%2Dalign%3A% [...]
+
+</head>
+
+<body>
+
+
+
+
+<h1 class="title toc-ignore">Tidy evaluation</h1>
+
+
+
+<p>Tidy evaluation is a general toolkit for non-standard evaluation, principally used to create domain-specific languages or grammars. The most prominent examples of such sublanguages in R are modelling specifications with formulas (<code>lm()</code>, <code>lme4::lmer()</code>, etc) and data manipulation grammars (dplyr, tidyr). Most of these DSLs put dataframe columns in scope so that users can refer to them directly, saving keystrokes during interactive analysis and creating easily readable code.</p>
+<p>R makes it easy to create DSLs thanks to three features of the language:</p>
+<ul>
+<li><p>R code is first-class. That is, R code can be manipulated like any other object (see <code>sym()</code>, <code>lang()</code>, and <code>node()</code>). We use the term <strong>expression</strong> (see <code>is_expr()</code>) to refer to objects that are created by parsing R code.</p></li>
+<li><p>Scope is first-class. Scope is the lexical environment that associates values to symbols in expressions. Unlike most languages, environments can be created (see <code>env()</code>) and manipulated as regular objects.</p></li>
+<li><p>Finally, functions can capture the expressions that were supplied as arguments instead of being passed the value of these expressions (see <code>enquo()</code> and <code>enexpr()</code>).</p></li>
+</ul>
+<p>R functions can capture expressions, manipulate them like regular objects, and alter the meaning of symbols referenced in these expressions by changing the scope (the environment) in which they are evaluated. This combination of features allows R packages to change the meaning of R code and create domain-specific sublanguages.</p>
+<p>Tidy evaluation is an opinionated way to use these features to create consistent DSLs. The main principle is that sublanguages should feel and behave like R code. They change the meaning of R code, but only in a precise and circumscribed way, behaving otherwise predictably and in accordance with R semantics. As a result, users are able to leverage their existing knowledge of R programming to solve problems involving the sublanguage in ways that were not necessarily envisioned or planned by their designers.</p>
+<div id="parsing-versus-evaluation" class="section level2">
+<h2>Parsing versus evaluation</h2>
+<p>There are two ways of dealing with unevaluated expressions to create a sublanguage. The first is to parse the expression and modify it, and the other is to leave the expression as is and evaluate it in a modified environment.</p>
+<p>Let’s take the example of designing a modelling DSL to illustrate parsing. You would need to traverse the call and analyse all functions encountered in the expression (in particular, operators like <code>+</code> or <code>:</code>), building a data structure describing a model as you go. This method of dealing with expressions is complex, rigid, and error prone because you’re basically writing an interpreter for R code. It is extremely difficult to emulate R semantics when parsing an expression: does a function take arguments by value or by expression? Can I parse these arguments? Do these symbols mean the same thing in this context? Will this argument be evaluated immediately or later on lazily? Given the difficulty of getting it right, parsing should be a last resort.</p>
+<p>The second way is to rely on evaluation in a specific environment. The expression is evaluated in an environment where certain objects and functions are given special definitions. For instance <code>+</code> might be defined as accumulating vectors in a data structure to build a design matrix later on, or we might put helper functions in scope (an example is <code>dplyr::select()</code>). Because this method relies on the R interpreter, the grammar is much more likely to behave like real R code.</p>
+<p>R DSLs are traditionally implemented with a mix of both principles. Expressions are parsed in ad hoc ways, but are eventually evaluated in an environment containing dataframe columns. While it is difficult to completely avoid ad hoc parsing, tidyeval DSLs strive to rely on evaluation as much as possible.</p>
+</div>
+<div id="values-versus-expressions" class="section level2">
+<h2>Values versus expressions</h2>
+<p>A corollary of emphasising evaluation is that your DSL functions should understand <em>values</em> in addition to expressions. This is especially important with quasiquotation: users can bypass symbolic evaluation completely by unquoting values. For instance, the following expressions are completely equivalent:</p>
+<pre class="sourceCode r"><code class="sourceCode r"><span class="co"># Taking an expression:</span>
+dplyr::<span class="kw">mutate</span>(mtcars, <span class="dt">cyl2 =</span> cyl *<span class="st"> </span><span class="dv">2</span>)
+
+<span class="co"># Taking a value:</span>
+var <-<span class="st"> </span>mtcars$cyl *<span class="st"> </span><span class="dv">2</span>
+dplyr::<span class="kw">mutate</span>(mtcars, <span class="dt">cyl2 =</span> !!<span class="st"> </span>var)</code></pre>
+<p><code>dplyr::mutate()</code> evaluates expressions in a context where dataframe columns are in scope, but it accepts any value that can be treated as a column (a recycled scalar or a vector as long as there are rows).</p>
+<p>A more complex example is <code>dplyr::select()</code>. This function evaluates dataframe columns in a context where they represent column positions. Therefore, <code>select()</code> understands column symbols like <code>cyl</code>:</p>
+<pre class="sourceCode r"><code class="sourceCode r"><span class="co"># Taking a symbol:</span>
+dplyr::<span class="kw">select</span>(mtcars, cyl)
+
+<span class="co"># Taking an unquoted symbol:</span>
+var <-<span class="st"> </span><span class="kw">quote</span>(sym)
+dplyr::<span class="kw">select</span>(mtcars, !!<span class="st"> </span>var)</code></pre>
+<p>But it also understands column positions:</p>
+<pre class="sourceCode r"><code class="sourceCode r"><span class="co"># Taking a column position:</span>
+dplyr::<span class="kw">select</span>(mtcars, <span class="dv">2</span>)
+
+<span class="co"># Taking an unquoted column position:</span>
+var <-<span class="st"> </span><span class="dv">2</span>
+dplyr::<span class="kw">select</span>(mtcars, !!<span class="st"> </span>var)</code></pre>
+<p>Understanding values in addition to expressions makes your grammar more consistent, predictable, and programmable.</p>
+</div>
+<div id="tidy-scoping" class="section level2">
+<h2>Tidy scoping</h2>
+<p>The special type of scoping found in R grammars implemented with evaluation poses some challenges. Both objects from a dataset and objects from the current environment should be in scope, with the former having precedence over the latter. In other words, the dataset should <strong>overscope</strong> the dynamic context. The traditional solution to this issue in R is to transform a dataframe to an environment and set the calling frame as the parent environment. This way, the symbols appearing in the expression can refer to their surrounding context in addition to dataframe columns. In other words, the grammar correctly implements an important aspect of R: <a href="http://adv-r.had.co.nz/Functions.html#lexical-scoping">lexical scoping</a>.</p>
+<p>Creating this scope hierarchy (data first, context next) is possible because R makes it easy to capture the calling environment (see <code>caller_env()</code>). However, this assumes that captured expressions were actually typed in the most immediate caller frame. This assumption easily breaks in R. First, because quasiquotation allows a user to combine expressions that do not necessarily come from the same lexical context. Second, because arguments can be forwarded through the special <code>...</code> argument. While base R does not provide any way of capturing a forwarded argument along with its original environment, rlang features <code>quos()</code> for this purpose. This function looks up each forwarded argument and returns a list of quosures that bundle the expressions with their own dynamic environments.</p>
+<p>In that context, maintaining scoping consistency is a challenge because we’re dealing with multiple environments, one for each argument plus one containing the overscoped data. This creates difficulties regarding tidyeval’s overarching principle that we should change R semantics through evaluation. It is possible to evaluate each expression in turn, but how can we combine all expressions into one and evaluate it tidily at once? An expression can only be evaluated in a single environment. This is where quosures come into play.</p>
+</div>
+<div id="quosures-and-overscoping" class="section level2">
+<h2>Quosures and overscoping</h2>
+<p>Unlike formulas, quosures aren’t simple containers of an expression and an environment. In the tidyeval framework, they have the property of self-evaluating in their own environment. Hence they can appear anywhere in an expression (e.g. by being <a href="http://rlang.tidyverse.org/reference/quasiquotation.html">unquoted</a>), carrying their own environment and behaving otherwise exactly like surrounding R code. Quosures behave like reified <a href="http://adv-r.had.co.nz/Computing-on-the-language.html#capturing-expressions">promises</a> that are unreified during tidy evaluation.</p>
+<p>However, the dynamic environments of quosures do not contain overscoped data. It’s not of much use for sublanguages to get the contextual environment right if they can’t also change the meaning of code quoted in quosures. To solve this issue, tidyeval rechains the overscope to a quosure just before it self-evaluates. This way, both the lexical environment and the overscoped data are in scope when the quosure is evaluated. It is evaluated tidily.</p>
+<p>In practical terms, <code>eval_tidy()</code> takes a <code>data</code> argument and creates an overscope suitable for tidy evaluation. In particular, these overscopes contain definitions for self-evaluation of quosures. See <code>eval_tidy_()</code> and <code>as_overscope()</code> for more flexible ways of creating overscopes.</p>
+</div>
+<div id="theory" class="section level2">
+<h2>Theory</h2>
+<p>The most important concept of the tidy evaluation framework is that expressions should be scoped in their dynamic context. This issue is linked to the computer science concept of <em>hygiene</em>, which roughly means that symbols should be scoped in their local context, the context where they are typed by the user. In a way, hygiene is what “tidy” refers to in “tidy evaluation”.</p>
+<p>In languages with macros, hygiene comes up for <a href="https://en.wikipedia.org/wiki/Hygienic_macro">macro expansion</a>. While macros look like R’s non-standard evaluation functions, and share certain concepts with them (in particular, they get their arguments as unevaluated code), they are actually quite different. Macros are compile-time and therefore can only operate on code and constants, never on user data. They also don’t return a value but are expanded in place by the compiler. In comparison, R does not have macros but it has <a href="https://en.wikipedia.org/wiki/Fexpr">fexprs</a>, i.e. regular functions that get arguments as unevaluated expressions rather than by their value (fexprs are what we call NSE functions in the R community). Unlike macros, these functions execute at run-time and return a value.</p>
+<p>Symbolic hygiene is a problem for macros during expansion because expanded code might invisibly redefine surrounding symbols. Correspondingly, hygiene is an issue for NSE functions if the code they captured gets evaluated in the wrong environment. Historically, fexprs did not have this problem because they existed in languages with dynamic scoping. However in modern languages with lexical scoping, it is imperative to bundle quoted expressions with their dynamic environment. The most natural way to do this in R is to use formulas and quosures.</p>
+<p>While formulas were introduced in the S language, the quosure was invented much later for R <a href="https://github.com/wch/r-source/commit/a945ac8e6a82617205442d44a2be3a497d2ac896">by Luke Tierney in 2000</a>. From that point on formulas recorded their environment along with the model terms. In the Lisp world, the Kernel Lisp language also recognised that arguments should be captured together with their dynamic environment in order to solve hygienic evaluation in the context of lexically scoped languages (see chapter 5 of <a href="https://web.wpi.edu/Pubs/ETD/Available/etd-090110-124904/">John Schutt’s thesis</a>). However, Kernel Lisp did not have quosures and avoided quotation or quasiquotation operators altogether to avoid scoping issues.</p>
+<p>Tidyeval contributes to the problem of hygienic evaluation in four ways:</p>
+<ul>
+<li><p>Promoting the quosure as the proper quotation data structure, in order to keep track of the dynamic environment of quoted expressions.</p></li>
+<li><p>Introducing systematic quasiquotation in all capturing functions in order to make it straightforward to program with these functions.</p></li>
+<li><p>Treating quosures as reified promises that self-evaluate within their own environments. This allows unquoting quosures within other quosures, which is the key for programming hygienically with capturing functions.</p></li>
+<li><p>Building a moving overscope that rechains to quosures as they get evaluated. This makes it possible to change the evaluation context and at the same time take the lexical context of each quosure into account.</p></li>
+</ul>
+</div>
+
+
+
+<!-- dynamically load mathjax for compatibility with self-contained -->
+<script>
+  (function () {
+    var script = document.createElement("script");
+    script.type = "text/javascript";
+    script.src  = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML";
+    document.getElementsByTagName("head")[0].appendChild(script);
+  })();
+</script>
+
+</body>
+</html>
diff --git a/man/abort.Rd b/man/abort.Rd
new file mode 100644
index 0000000..9941295
--- /dev/null
+++ b/man/abort.Rd
@@ -0,0 +1,50 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd.R
+\name{abort}
+\alias{abort}
+\alias{warn}
+\alias{inform}
+\title{Signal an error, warning, or message}
+\usage{
+abort(msg, type = NULL, call = FALSE)
+
+warn(msg, type = NULL, call = FALSE)
+
+inform(msg, type = NULL, call = FALSE)
+}
+\arguments{
+\item{msg}{A message to display.}
+
+\item{type}{Subclass of the condition to signal.}
+
+\item{call}{Whether to display the call.}
+}
+\description{
+These functions are equivalent to base functions \code{\link[base:stop]{base::stop()}},
+\code{\link[base:warning]{base::warning()}} and \code{\link[base:message]{base::message()}}, but the \code{type} argument
+makes it easy to create subclassed conditions. They also don't
+include call information by default. This saves you from typing
+\code{call. = FALSE} to make error messages cleaner within package
+functions.
+}
+\details{
+Like \code{stop()} and \code{\link[=cnd_abort]{cnd_abort()}}, \code{abort()} signals a critical
+condition and interrupts execution by jumping to top level (see
+\code{\link[=rst_abort]{rst_abort()}}). Only a handler of the relevant type can prevent
+this jump by making another jump to a different target on the stack
+(see \code{\link[=with_handlers]{with_handlers()}}).
+
+\code{warn()} and \code{inform()} both have the side effect of displaying a
+message. These messages will not be displayed if a handler
+transfers control. Transfer can be achieved by establishing an
+exiting handler that transfers control to \code{\link[=with_handlers]{with_handlers()}}. In
+this case, the current function stops and execution resumes at the
+point where handlers were established.
+
+Since it is often desirable to continue normally after a message or
+warning, both \code{warn()} and \code{inform()} (and their base R equivalents)
+establish a muffle restart where handlers can jump to prevent the
+message from being displayed. Execution resumes normally after
+that. See \code{\link[=rst_muffle]{rst_muffle()}} to jump to a muffling restart, and the
+\code{muffle} argument of \code{\link[=inplace]{inplace()}} for creating a muffling handler.
+}
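The subclassing behaviour described above can be sketched as follows. This is a hedged usage sketch: it assumes, as the description states, that the \code{type} argument subclasses the condition so a \code{tryCatch()} handler can match on it (the \code{my_invalid_input} class name is invented for illustration):

```r
library(rlang)

# abort() signals an error without call information by default.
# The `type` argument subclasses the condition, so a handler keyed
# on that class can catch it:
result <- tryCatch(
  abort("the input was invalid", type = "my_invalid_input"),
  my_invalid_input = function(cnd) conditionMessage(cnd)
)
result
```

Here the handler returns the condition message, so `result` should hold the string passed to `abort()`.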
diff --git a/man/are_na.Rd b/man/are_na.Rd
new file mode 100644
index 0000000..9de7776
--- /dev/null
+++ b/man/are_na.Rd
@@ -0,0 +1,58 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-missing.R
+\name{are_na}
+\alias{are_na}
+\alias{is_na}
+\alias{is_lgl_na}
+\alias{is_int_na}
+\alias{is_dbl_na}
+\alias{is_chr_na}
+\alias{is_cpl_na}
+\title{Test for missing values}
+\usage{
+are_na(x)
+
+is_na(x)
+
+is_lgl_na(x)
+
+is_int_na(x)
+
+is_dbl_na(x)
+
+is_chr_na(x)
+
+is_cpl_na(x)
+}
+\arguments{
+\item{x}{An object to test}
+}
+\description{
+\code{are_na()} checks for missing values in a vector and is equivalent
+to \code{\link[base:is.na]{base::is.na()}}. It is a vectorised predicate, meaning that its
+output is always the same length as its input. On the other hand,
+\code{is_na()} is a scalar predicate and always returns a scalar
+boolean, \code{TRUE} or \code{FALSE}. If its input is not scalar, it returns
+\code{FALSE}. Finally, there are typed versions that check for
+particular \link[=missing]{missing types}.
+}
+\details{
+The scalar predicates accept non-vector inputs. They are equivalent
+to \code{\link[=is_null]{is_null()}} in that respect. In contrast the vectorised
+predicate \code{are_na()} requires a vector input since it is defined
+over vector values.
+}
+\examples{
+# are_na() is vectorised and works regardless of the type
+are_na(c(1, 2, NA))
+are_na(c(1L, NA, 3L))
+
+# is_na() checks for scalar input and works for all types
+is_na(NA)
+is_na(na_dbl)
+is_na(character(0))
+
+# There are typed versions as well:
+is_lgl_na(NA)
+is_lgl_na(na_dbl)
+}
diff --git a/man/arg_match.Rd b/man/arg_match.Rd
new file mode 100644
index 0000000..8463a19
--- /dev/null
+++ b/man/arg_match.Rd
@@ -0,0 +1,34 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/arg.R
+\name{arg_match}
+\alias{arg_match}
+\title{Match an argument to a character vector}
+\usage{
+arg_match(arg, values = NULL)
+}
+\arguments{
+\item{arg}{A symbol referring to an argument accepting strings.}
+
+\item{values}{The possible values that \code{arg} can take. If \code{NULL},
+the values are taken from the function definition of the \link[=caller_frame]{caller
+frame}.}
+}
+\value{
+The string supplied to \code{arg}.
+}
+\description{
+This is equivalent to \code{\link[base:match.arg]{base::match.arg()}} with a few differences:
+\itemize{
+\item Partial matches trigger an error.
+\item Error messages are a bit more informative and obey the tidyverse
+standards.
+}
+}
+\examples{
+fn <- function(x = c("foo", "bar")) arg_match(x)
+fn("bar")
+
+# This would throw an informative error if run:
+# fn("b")
+# fn("baz")
+}
diff --git a/man/as_bytes.Rd b/man/as_bytes.Rd
new file mode 100644
index 0000000..1baac3d
--- /dev/null
+++ b/man/as_bytes.Rd
@@ -0,0 +1,18 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-raw.R
+\name{as_bytes}
+\alias{as_bytes}
+\title{Coerce to a raw vector}
+\usage{
+as_bytes(x)
+}
+\arguments{
+\item{x}{A string.}
+}
+\value{
+A raw vector of bytes.
+}
+\description{
+This currently only works with strings, and returns their
+underlying bytes (printed in hexadecimal).
+}
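Since this page ships without an examples section, here is a brief, hedged sketch of the coercion described above; for ASCII input the result should presumably agree with base R's \code{charToRaw()}:

```r
library(rlang)

# Coerce a string to its raw bytes:
bytes <- as_bytes("abc")
bytes  # a raw vector, printed in hexadecimal

# For ASCII strings this should agree with base R's charToRaw():
identical(bytes, charToRaw("abc"))
```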
diff --git a/man/as_env.Rd b/man/as_env.Rd
new file mode 100644
index 0000000..1ff9d63
--- /dev/null
+++ b/man/as_env.Rd
@@ -0,0 +1,42 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{as_env}
+\alias{as_env}
+\title{Coerce to an environment}
+\usage{
+as_env(x, parent = NULL)
+}
+\arguments{
+\item{x}{An object to coerce.}
+
+\item{parent}{A parent environment, \code{\link[=empty_env]{empty_env()}} by default. This
+argument is only used when \code{x} is data actually coerced to an
+environment (as opposed to data representing an environment, like
+\code{NULL} representing the empty environment).}
+}
+\description{
+\code{as_env()} coerces named vectors (including lists) to an
+environment. It first checks that \code{x} is a dictionary (see
+\code{\link[=is_dictionaryish]{is_dictionaryish()}}). If supplied an unnamed string, it returns the
+corresponding package environment (see \code{\link[=pkg_env]{pkg_env()}}).
+}
+\details{
+If \code{x} is an environment and \code{parent} is not \code{NULL}, the
+environment is duplicated before being assigned a new parent. The return
+value is therefore a different environment than \code{x}.
+}
+\examples{
+# Coerce a named vector to an environment:
+env <- as_env(mtcars)
+
+# By default it gets the empty environment as parent:
+identical(env_parent(env), empty_env())
+
+
+# With strings it is a handy shortcut for pkg_env():
+as_env("base")
+as_env("rlang")
+
+# With NULL it returns the empty environment:
+as_env(NULL)
+}
diff --git a/man/as_function.Rd b/man/as_function.Rd
new file mode 100644
index 0000000..ec96f49
--- /dev/null
+++ b/man/as_function.Rd
@@ -0,0 +1,46 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/fn.R
+\name{as_function}
+\alias{as_function}
+\alias{as_closure}
+\title{Convert to function or closure}
+\usage{
+as_function(x, env = caller_env())
+
+as_closure(x, env = caller_env())
+}
+\arguments{
+\item{x}{A function or formula.
+
+If a \strong{function}, it is used as is.
+
+If a \strong{formula}, e.g. \code{~ .x + 2}, it is converted to a function
+with two arguments, \code{.x} or \code{.} and \code{.y}. This allows you to
+create very compact anonymous functions with up to two inputs.}
+
+\item{env}{Environment in which to fetch the function in case \code{x}
+is a string.}
+}
+\description{
+\itemize{
+\item \code{as_function()} transforms objects to functions. It fetches
+functions by name if supplied a string or transforms
+\link[=quosure]{quosures} to a proper function.
+\item \code{as_closure()} first passes its argument to \code{as_function()}. If
+the result is a primitive function, it regularises it to a proper
+\link{closure} (see \code{\link[=is_function]{is_function()}} about primitive functions).
+}
+}
+\examples{
+f <- as_function(~ . + 1)
+f(10)
+
+# Primitive functions are regularised as closures
+as_closure(list)
+as_closure("list")
+
+# Operators have `.x` and `.y` as arguments, just like lambda
+# functions created with the formula syntax:
+as_closure(`+`)
+as_closure(`~`)
+}
diff --git a/man/as_overscope.Rd b/man/as_overscope.Rd
new file mode 100644
index 0000000..e2f7fd7
--- /dev/null
+++ b/man/as_overscope.Rd
@@ -0,0 +1,121 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/eval-tidy.R
+\name{as_overscope}
+\alias{as_overscope}
+\alias{new_overscope}
+\alias{overscope_eval_next}
+\alias{overscope_clean}
+\title{Create a dynamic scope for tidy evaluation}
+\usage{
+as_overscope(quo, data = NULL)
+
+new_overscope(bottom, top = NULL, enclosure = base_env())
+
+overscope_eval_next(overscope, quo, env = base_env())
+
+overscope_clean(overscope)
+}
+\arguments{
+\item{quo}{A \link{quosure}.}
+
+\item{data}{Additional data to put in scope.}
+
+\item{bottom}{This is the environment (or the bottom of a set of
+environments) containing definitions for overscoped symbols. The
+bottom environment typically contains pronouns (like \code{.data})
+while its direct parents contain the overscoping bindings. The
+last one of these parents is the \code{top}.}
+
+\item{top}{The top environment of the overscope. During tidy
+evaluation, this environment is chained and rechained to lexical
+enclosures of self-evaluating formulas (or quosures). This is the
+mechanism that ensures hygienic scoping: the bindings in the
+overscope have precedence, but the bindings in the dynamic
+environment where the tidy quotes were created in the first place
+are in scope as well.}
+
+\item{enclosure}{The default enclosure. After a quosure is done
+self-evaluating, the overscope is rechained to the default
+enclosure.}
+
+\item{overscope}{A valid overscope containing bindings for \code{~},
+\code{.top_env} and \code{_F} and whose parents contain overscoped bindings
+for tidy evaluation.}
+
+\item{env}{The lexical enclosure in case \code{quo} is not a validly
+scoped quosure. This is the \link[=base_env]{base environment} by
+default.}
+}
+\value{
+An overscope environment.
+
+A valid overscope: a child environment of \code{bottom}
+containing the definitions enabling tidy evaluation
+(self-evaluating quosures, formula-unguarding, ...).
+}
+\description{
+Tidy evaluation works by rescoping a set of symbols (column names
+of a data frame for example) to custom bindings. While doing this,
+it is important to keep the original environment of captured
+expressions in scope. The gist of tidy evaluation is to create a
+dynamic scope containing custom bindings that should have
+precedence when expressions are evaluated, and chain this scope
+(set of linked environments) to the lexical enclosure of formulas
+under evaluation. During tidy evaluation, formulas are transformed
+into formula-promises and will self-evaluate their RHS as soon as
+they are called. The main trick of tidyeval is to consistently
+rechain the dynamic scope to the lexical enclosure of each tidy
+quote under evaluation.
+}
+\details{
+These functions are useful for embedding the tidy evaluation
+framework in your own DSLs with your own evaluating function. They
+let you create a custom dynamic scope. That is, a set of chained
+environments whose bottom serves as evaluation environment and
+whose top is rechained to the current lexical enclosure. But most
+of the time, you can just use \code{\link[=eval_tidy_]{eval_tidy_()}} as it will take
+care of installing the tidyeval components in your custom dynamic
+scope.
+\itemize{
+\item \code{as_overscope()} is the function that powers \code{\link[=eval_tidy]{eval_tidy()}}. It
+could be useful if you cannot use \code{eval_tidy()} for some reason,
+but serves mostly as an example of how to build a dynamic scope
+for tidy evaluation. In this case, it creates pronouns \code{.data}
+and \code{.env} and buries all dynamic bindings from the supplied
+\code{data} in new environments.
+\item \code{new_overscope()} is called by \code{as_overscope()} and
+\code{\link[=eval_tidy_]{eval_tidy_()}}. It installs the definitions for making
+formulas self-evaluate and for formula-guards. It also installs
+the pronoun \code{.top_env} that helps keep track of the boundary
+of the dynamic scope. If you evaluate a tidy quote with
+\code{\link[=eval_tidy_]{eval_tidy_()}}, you don't need to use this.
+\item \code{eval_tidy_()} is useful when you have several quosures to
+evaluate in the same dynamic scope. That's a simple wrapper around
+\code{\link[=eval_bare]{eval_bare()}} that updates the \code{.env} pronoun and rechains the
+dynamic scope to the new formula enclosure to evaluate.
+\item Once an expression has been evaluated in the tidy environment,
+it's a good idea to clean up the definitions that make
+self-evaluation of formulas possible with \code{overscope_clean()}.
+Otherwise your users may face unexpected results in specific
+corner cases (e.g. when the evaluation environment is leaked, see
+examples). Note that this function is automatically called by
+\code{\link[=eval_tidy_]{eval_tidy_()}}.
+}
+}
+\examples{
+# Evaluating in a tidy evaluation environment enables all tidy
+# features:
+expr <- quote(list(.data$cyl, ~letters))
+f <- as_quosure(expr)
+overscope <- as_overscope(f, data = mtcars)
+overscope_eval_next(overscope, f)
+
+# However, you need to clean up the environment after evaluation.
+# Otherwise the leftover definitions for self-evaluation of
+# formulas might cause unexpected results:
+fn <- overscope_eval_next(overscope, ~function() ~letters)
+fn()
+
+overscope_clean(overscope)
+fn()
+}
diff --git a/man/as_pairlist.Rd b/man/as_pairlist.Rd
new file mode 100644
index 0000000..27dd8c3
--- /dev/null
+++ b/man/as_pairlist.Rd
@@ -0,0 +1,18 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-node.R
+\name{as_pairlist}
+\alias{as_pairlist}
+\title{Coerce to pairlist}
+\usage{
+as_pairlist(x)
+}
+\arguments{
+\item{x}{An object to coerce.}
+}
+\description{
+This transforms vector objects to a linked pairlist of nodes. See
+\link{pairlist} for information about the pairlist type.
+}
+\seealso{
+\link{pairlist}
+}
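As this page has no examples, a minimal sketch of the coercion it documents may help; it assumes \code{as_pairlist()} accepts a regular list, as the description suggests:

```r
library(rlang)

# Convert a vector (here a list) to a linked pairlist of nodes:
pl <- as_pairlist(list(a = 1, b = 2))

# The result has R's internal pairlist type:
typeof(pl)
```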
diff --git a/man/as_quosure.Rd b/man/as_quosure.Rd
new file mode 100644
index 0000000..5e6c68a
--- /dev/null
+++ b/man/as_quosure.Rd
@@ -0,0 +1,64 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quo.R
+\name{as_quosure}
+\alias{as_quosure}
+\alias{as_quosureish}
+\title{Coerce object to quosure}
+\usage{
+as_quosure(x, env = caller_env())
+
+as_quosureish(x, env = caller_env())
+}
+\arguments{
+\item{x}{An object to convert.}
+
+\item{env}{An environment specifying the lexical enclosure of the
+quosure.}
+}
+\description{
+Quosure objects wrap an \link[=is_expr]{expression} with a \link[=env]{lexical
+enclosure}. This is a powerful quoting (see \code{\link[base:quote]{base::quote()}}
+and \code{\link[=quo]{quo()}}) mechanism that makes it possible to carry and
+manipulate expressions while making sure that their symbolic content
+(symbols and named calls, see \code{\link[=is_symbolic]{is_symbolic()}}) is correctly looked
+up during evaluation.
+\itemize{
+\item \code{new_quosure()} creates a quosure from a raw expression and an
+environment.
+\item \code{as_quosure()} is useful for functions that expect quosures but
+allow specifying a raw expression as well. It has two possible
+effects: if \code{x} is not a quosure, it wraps it into a quosure
+bundling \code{env} as scope. If \code{x} is an unscoped quosure (see
+\code{\link[=is_quosure]{is_quosure()}}), \code{env} is used as a default scope. On the other
+hand if \code{x} has a valid enclosure, it is returned as is (even if
+\code{env} is not the same as the formula environment).
+\item While \code{as_quosure()} always returns a quosure (a one-sided
+formula), even when its input is a \link[=new_formula]{formula} or a
+\link[=op-definition]{definition}, \code{as_quosureish()} returns quosureish
+inputs as is.
+}
+}
+\examples{
+# Sometimes you get unscoped formulas because of quotation:
+f <- ~~expr
+inner_f <- f_rhs(f)
+str(inner_f)
+is_quosureish(inner_f, scoped = TRUE)
+
+# You can use as_quosure() to provide a default environment:
+as_quosure(inner_f, base_env())
+
+# Or convert expressions or any R object to a validly scoped quosure:
+as_quosure(quote(expr), base_env())
+as_quosure(10L, base_env())
+
+
+# While as_quosure() always returns a quosure (one-sided formula),
+# as_quosureish() returns quosureish objects:
+as_quosure(a := b)
+as_quosureish(a := b)
+as_quosureish(10L)
+}
+\seealso{
+\code{\link[=is_quosure]{is_quosure()}}
+}
diff --git a/man/as_utf8_character.Rd b/man/as_utf8_character.Rd
new file mode 100644
index 0000000..7ef3d86
--- /dev/null
+++ b/man/as_utf8_character.Rd
@@ -0,0 +1,55 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-chr.R
+\name{as_utf8_character}
+\alias{as_utf8_character}
+\alias{as_native_character}
+\alias{as_utf8_string}
+\alias{as_native_string}
+\title{Coerce to a character vector and attempt encoding conversion}
+\usage{
+as_utf8_character(x)
+
+as_native_character(x)
+
+as_utf8_string(x)
+
+as_native_string(x)
+}
+\arguments{
+\item{x}{An object to coerce.}
+}
+\description{
+Unlike specifying the \code{encoding} argument in \code{as_string()} and
+\code{as_character()}, which is only declarative, these functions
+actually attempt to convert the encoding of their input. There are
+two possible cases:
+\itemize{
+\item The string is tagged as UTF-8 or latin1, the only two encodings
+for which R has specific support. In this case, converting to the
+same encoding is a no-op, and converting to native always works
+as expected, as long as the native encoding, the one specified by
+the \code{LC_CTYPE} locale (see \code{\link[=mut_utf8_locale]{mut_utf8_locale()}}) has support for
+all characters occurring in the strings. Unrepresentable
+characters are serialised as Unicode code points: "<U+xxxx>".
+\item The string is not tagged. R assumes that it is encoded in the
+native encoding. Conversion to native is a no-op, and conversion
+to UTF-8 should work as long as the string is actually encoded in
+the locale codeset.
+}
+}
+\examples{
+# Let's create a string marked as UTF-8 (which is guaranteed by the
+# Unicode escaping in the string):
+utf8 <- "caf\\uE9"
+str_encoding(utf8)
+as_bytes(utf8)
+
+# It can then be converted to a native encoding, that is, the
+# encoding specified in the current locale:
+\dontrun{
+mut_latin1_locale()
+latin1 <- as_native_string(utf8)
+str_encoding(latin1)
+as_bytes(latin1)
+}
+}
diff --git a/man/bare-type-predicates.Rd b/man/bare-type-predicates.Rd
new file mode 100644
index 0000000..8c53f0e
--- /dev/null
+++ b/man/bare-type-predicates.Rd
@@ -0,0 +1,65 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{bare-type-predicates}
+\alias{bare-type-predicates}
+\alias{is_bare_list}
+\alias{is_bare_atomic}
+\alias{is_bare_vector}
+\alias{is_bare_double}
+\alias{is_bare_integer}
+\alias{is_bare_numeric}
+\alias{is_bare_character}
+\alias{is_bare_logical}
+\alias{is_bare_raw}
+\alias{is_bare_string}
+\alias{is_bare_bytes}
+\title{Bare type predicates}
+\usage{
+is_bare_list(x, n = NULL)
+
+is_bare_atomic(x, n = NULL)
+
+is_bare_vector(x, n = NULL)
+
+is_bare_double(x, n = NULL)
+
+is_bare_integer(x, n = NULL)
+
+is_bare_numeric(x, n = NULL)
+
+is_bare_character(x, n = NULL, encoding = NULL)
+
+is_bare_logical(x, n = NULL)
+
+is_bare_raw(x, n = NULL)
+
+is_bare_string(x, n = NULL)
+
+is_bare_bytes(x, n = NULL)
+}
+\arguments{
+\item{x}{Object to be tested.}
+
+\item{n}{Expected length of a vector.}
+
+\item{encoding}{Expected encoding of a string or character
+vector. One of \code{UTF-8}, \code{latin1}, or \code{unknown}.}
+}
+\description{
+These predicates check for a given type but only return \code{TRUE} for
+bare R objects. Bare objects have no class attributes. For example,
+a data frame is a list, but not a bare list.
+}
+\details{
+\itemize{
+\item The predicates for vectors include the \code{n} argument for
+pattern-matching on the vector length.
+\item Like \code{\link[=is_atomic]{is_atomic()}} and unlike base R \code{is.atomic()},
+\code{is_bare_atomic()} does not return \code{TRUE} for \code{NULL}.
+\item Unlike base R \code{is.numeric()}, \code{is_bare_double()} only returns
+\code{TRUE} for floating point numbers.
+}
+}
+\seealso{
+\link{type-predicates}, \link{scalar-type-predicates}
+}
diff --git a/man/call_inspect.Rd b/man/call_inspect.Rd
new file mode 100644
index 0000000..06bfeb0
--- /dev/null
+++ b/man/call_inspect.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{call_inspect}
+\alias{call_inspect}
+\title{Inspect a call}
+\usage{
+call_inspect(...)
+}
+\arguments{
+\item{...}{Arguments to display in the returned call.}
+}
+\description{
+This function is useful for quick testing and debugging when you
+manipulate expressions and calls. It lets you check that a function
+is called with the right arguments. This can be useful in unit
+tests for instance. Note that this is just a simple wrapper around
+\code{\link[base:match.call]{base::match.call()}}.
+}
+\examples{
+call_inspect(foo(bar), "" \%>\% identity())
+invoke(call_inspect, list(a = mtcars, b = letters))
+}
diff --git a/man/caller_env.Rd b/man/caller_env.Rd
new file mode 100644
index 0000000..ce9078a
--- /dev/null
+++ b/man/caller_env.Rd
@@ -0,0 +1,27 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{caller_env}
+\alias{caller_env}
+\alias{caller_frame}
+\alias{caller_fn}
+\title{Get the environment of the caller frame}
+\usage{
+caller_env(n = 1)
+
+caller_frame(n = 1)
+
+caller_fn(n = 1)
+}
+\arguments{
+\item{n}{The number of generations to go back. Note that contrary
+to \code{\link[=call_frame]{call_frame()}}, 1 represents the parent frame rather than the
+current frame.}
+}
+\description{
+\code{caller_frame()} is a shortcut for \code{call_frame(2)} and
+\code{caller_fn()} and \code{caller_env()} are shortcuts for
+\code{call_frame(2)$env} and \code{call_frame(2)$fn} respectively.
+}
+\seealso{
+\code{\link[=call_frame]{call_frame()}}
+}
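This man page also lacks examples; the following hedged sketch illustrates the documented behaviour (the helper names \code{f} and \code{g} are invented):

```r
library(rlang)

g <- function() caller_env()  # environment of whoever called g()
f <- function() {
  x <- "local to f"
  g()
}

env <- f()
# The captured environment is f()'s evaluation frame, so f()'s
# local binding is visible through it:
get("x", envir = env)
```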
diff --git a/man/cnd_signal.Rd b/man/cnd_signal.Rd
new file mode 100644
index 0000000..6509405
--- /dev/null
+++ b/man/cnd_signal.Rd
@@ -0,0 +1,134 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd.R
+\name{cnd_signal}
+\alias{cnd_signal}
+\alias{cnd_abort}
+\title{Signal a condition}
+\usage{
+cnd_signal(.cnd, ..., .msg = NULL, .call = FALSE, .mufflable = TRUE)
+
+cnd_abort(.cnd, ..., .msg = NULL, .call = FALSE, .mufflable = FALSE)
+}
+\arguments{
+\item{.cnd}{Either a condition object (see \code{\link[=new_cnd]{new_cnd()}}), or the
+name of an S3 class from which a new condition will be created.}
+
+\item{...}{Named data fields stored inside the condition
+object. These dots are evaluated with \link[=dots_list]{explicit
+splicing}.}
+
+\item{.msg}{A string to override the condition's default message.}
+
+\item{.call}{Whether to display the call of the frame in which the
+condition is signalled. If \code{TRUE}, the call is stored in the
+\code{call} field of the condition object: this field is displayed by
+R when an error is issued. The call information is also stored in
+the \code{.call} field in all cases.}
+
+\item{.mufflable}{Whether to signal the condition with a muffling
+restart. This is useful to let \code{\link[=inplace]{inplace()}} handlers muffle a
+condition. It stops the condition from being passed to other
+handlers when the inplace handler did not jump elsewhere. \code{TRUE}
+by default for benign conditions, but \code{FALSE} for critical ones,
+since in those cases execution should probably not be allowed to
+continue normally.}
+}
+\description{
+Signal a condition to handlers that have been established on the
+stack. Conditions signalled with \code{cnd_signal()} are assumed to be
+benign. Control flow can resume normally once the condition has
+been signalled (if no handler jumped somewhere else on the
+evaluation stack). On the other hand, \code{cnd_abort()} treats the
+condition as critical and will jump out of the distressed call
+frame (see \code{\link[=rst_abort]{rst_abort()}}), unless a handler can deal with the
+condition.
+}
+\details{
+\code{cnd_signal()} has no side effects beyond calling handlers. In
+particular, execution will continue normally after signalling the
+condition (unless a handler jumped somewhere else via
+\code{\link[=rst_jump]{rst_jump()}} or by being \code{\link[=exiting]{exiting()}}). With \code{cnd_abort()},
+the condition is signalled via \code{\link[base:stop]{base::stop()}} and the
+program will terminate if no handler dealt with the condition by
+jumping out of the distressed call frame.
+
+\code{\link[=inplace]{inplace()}} handlers are called in turn when they decline to handle
+the condition by returning normally. However, it is sometimes
+useful for an inplace handler to produce a side effect (signalling
+another condition, displaying a message, logging something, etc),
+prevent the condition from being passed to other handlers, and
+resume execution from the place where the condition was
+signalled. The easiest way to accomplish this is by jumping to a
+restart point (see \code{\link[=with_restarts]{with_restarts()}}) established by the signalling
+function. If \code{.mufflable} is \code{TRUE}, a muffle restart is
+established. This allows an inplace handler to muffle a signalled
+condition. See \code{\link[=rst_muffle]{rst_muffle()}} to jump to a muffling restart, and
+the \code{muffle} argument of \code{\link[=inplace]{inplace()}} for creating a muffling
+handler.
+}
+\examples{
+# Creating a condition of type "foo"
+cnd <- new_cnd("foo")
+
+# If no handler capable of dealing with "foo" is established on the
+# stack, signalling the condition has no effect:
+cnd_signal(cnd)
+
+# To learn more about establishing condition handlers, see
+# documentation for with_handlers(), exiting() and inplace():
+with_handlers(cnd_signal(cnd),
+  foo = inplace(function(c) cat("side effect!\\n"))
+)
+
+
+# By default, cnd_signal() creates a muffling restart which allows
+# inplace handlers to prevent a condition from being passed on to
+# other handlers and to resume execution:
+undesirable_handler <- inplace(function(c) cat("please don't call me\\n"))
+muffling_handler <- inplace(function(c) {
+  cat("muffling foo...\\n")
+  rst_muffle(c)
+})
+
+with_handlers(foo = undesirable_handler,
+  with_handlers(foo = muffling_handler, {
+    cnd_signal("foo")
+    "return value"
+  }))
+
+
+# You can signal a critical condition with cnd_abort(). Unlike
+# cnd_signal() which has no side effect besides signalling the
+# condition, cnd_abort() makes the program terminate with an error
+# unless a handler can deal with the condition:
+\dontrun{
+cnd_abort(cnd)
+}
+
+# If you don't specify a .msg or .call, the default message/call
+# (supplied to new_cnd()) are displayed. Otherwise, the ones
+# supplied to cnd_abort() and cnd_signal() take precedence:
+\dontrun{
+critical <- new_cnd("my_error",
+  .msg = "default 'my_error' msg",
+  .call = quote(default(call))
+)
+cnd_abort(critical)
+cnd_abort(critical, .msg = "overridden msg")
+
+fn <- function(...) {
+  cnd_abort(critical, .call = TRUE)
+}
+fn(arg = foo(bar))
+}
+
+# Note that by default a condition signalled with cnd_abort() does
+# not have a muffle restart. That is because in most cases,
+# execution should not continue after signalling a critical
+# condition.
+}
+\seealso{
+\code{\link[=abort]{abort()}}, \code{\link[=warn]{warn()}} and \code{\link[=inform]{inform()}} for signalling typical
+R conditions. See \code{\link[=with_handlers]{with_handlers()}} for establishing condition
+handlers.
+}
diff --git a/man/dictionary.Rd b/man/dictionary.Rd
new file mode 100644
index 0000000..227016b
--- /dev/null
+++ b/man/dictionary.Rd
@@ -0,0 +1,40 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/dictionary.R
+\name{dictionary}
+\alias{dictionary}
+\alias{as_dictionary}
+\alias{is_dictionary}
+\title{Create a dictionary}
+\usage{
+as_dictionary(x, lookup_msg = NULL, read_only = FALSE)
+
+is_dictionary(x)
+}
+\arguments{
+\item{x}{An object for which you want to find associated data.}
+
+\item{lookup_msg}{An error message when your data source is
+accessed inappropriately (by position rather than name).}
+
+\item{read_only}{Whether users can replace elements of the
+dictionary.}
+}
+\description{
+Dictionaries are an abstract type modelled after R
+environments. They are containers of R objects that:
+\itemize{
+\item Contain uniquely named objects.
+\item Can only be indexed by name. They must implement the extracting
+operators \code{$} and \code{[[}. The latter returns an error when indexed
+by position because dictionaries are not vectors (they are
+unordered).
+\item Report a clear error message when asked to extract a name that
+does not exist. This error message can be customised with the
+\code{lookup_msg} constructor argument.
+}
+}
+\details{
+Dictionaries are used within the tidy evaluation framework for
+creating pronouns that can be explicitly referred to from captured
+code. See \code{\link[=eval_tidy]{eval_tidy()}}.
+}
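Since this page ships without examples, here is a minimal, hedged sketch of the constructor and predicate, assuming the documented name-only indexing (the \code{lookup_msg} text is invented for illustration):

```r
library(rlang)

# Wrap a data frame as a read-only dictionary with a custom lookup
# message:
d <- as_dictionary(mtcars, lookup_msg = "Column `%s` not found.",
                   read_only = TRUE)

# The predicate should recognise the wrapped object:
is_dictionary(d)

# Dictionaries are indexed by name only; positional indexing such
# as d[[2]] is documented to raise the lookup error instead.
```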
diff --git a/man/dots_list.Rd b/man/dots_list.Rd
new file mode 100644
index 0000000..629feed
--- /dev/null
+++ b/man/dots_list.Rd
@@ -0,0 +1,67 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/dots.R
+\name{dots_list}
+\alias{dots_list}
+\alias{dots_splice}
+\title{Extract dots with splicing semantics}
+\usage{
+dots_list(..., .ignore_empty = c("trailing", "none", "all"))
+
+dots_splice(..., .ignore_empty = c("trailing", "none", "all"))
+}
+\arguments{
+\item{...}{Arguments with explicit (\code{dots_list()}) or list
+(\code{dots_splice()}) splicing semantics. The contents of spliced
+arguments are embedded in the returned list.}
+
+\item{.ignore_empty}{Whether to ignore empty arguments. Can be one
+of \code{"trailing"}, \code{"none"}, \code{"all"}. If \code{"trailing"}, only the
+last argument is ignored if it is empty.}
+}
+\value{
+A list of arguments. This list is always named: unnamed
+arguments are named with the empty string \code{""}.
+}
+\description{
+These functions evaluate all arguments contained in \code{...} and
+return them as a list. They both splice their arguments if they
+qualify for splicing. See \code{\link[=ll]{ll()}} for information about splicing
+and below for the kind of arguments that qualify for splicing.
+}
+\details{
+\code{dots_list()} has \emph{explicit splicing semantics}: it splices lists
+that are explicitly marked for \link[=ll]{splicing} with the
+\code{\link[=splice]{splice()}} adjective. \code{dots_splice()} on the other hand has \emph{list
+splicing semantics}: in addition to lists marked explicitly for
+splicing, \link[=is_bare_list]{bare} lists are spliced as well.
+}
+\examples{
+# Compared to simply using list(...) to capture dots, dots_list()
+# splices explicitly:
+x <- list(1, 2)
+dots_list(!!! x, 3)
+
+# Unlike dots_splice(), it doesn't splice bare lists:
+dots_list(x, 3)
+
+# Splicing is also helpful to work around exact and partial matching
+# of arguments. Let's create a function taking named arguments and
+# dots:
+fn <- function(data, ...) {
+  dots_list(...)
+}
+
+# You normally cannot pass an argument named `data` through the dots
+# as it will match `fn`'s `data` argument. The splicing syntax
+# provides a workaround:
+fn(some_data, !!! list(data = letters))
+
+# dots_splice() splices lists marked with splice() as well as bare
+# lists:
+x <- list(1, 2)
+dots_splice(!!! x, 3)
+dots_splice(x, 3)
+}
+\seealso{
+\code{\link[=exprs]{exprs()}} for extracting dots without evaluation.
+}
diff --git a/man/dots_n.Rd b/man/dots_n.Rd
new file mode 100644
index 0000000..c89dfe1
--- /dev/null
+++ b/man/dots_n.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/dots.R
+\name{dots_n}
+\alias{dots_n}
+\title{How many arguments are currently forwarded in dots?}
+\usage{
+dots_n(...)
+}
+\arguments{
+\item{...}{Forwarded arguments.}
+}
+\description{
+This returns the number of arguments currently forwarded in \code{...}
+as an integer.
+}
+\examples{
+fn <- function(...) dots_n(..., baz)
+fn(foo, bar)
+}
diff --git a/man/dots_values.Rd b/man/dots_values.Rd
new file mode 100644
index 0000000..dec7728
--- /dev/null
+++ b/man/dots_values.Rd
@@ -0,0 +1,30 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/dots.R
+\name{dots_values}
+\alias{dots_values}
+\title{Evaluate dots with preliminary splicing}
+\usage{
+dots_values(..., .ignore_empty = c("trailing", "none", "all"))
+}
+\arguments{
+\item{...}{Arguments to evaluate and process splicing operators.}
+
+\item{.ignore_empty}{Whether to ignore empty arguments. Can be one
+of \code{"trailing"}, \code{"none"}, \code{"all"}. If \code{"trailing"}, only the
+last argument is ignored if it is empty.}
+}
+\description{
+This is a tool for advanced users. It captures dots, processes
+unquoting and splicing operators, and evaluates them. Unlike
+\code{\link[=dots_list]{dots_list()}} and \code{\link[=dots_splice]{dots_splice()}}, it does not flatten spliced
+objects. Instead, they are merely given a \code{spliced} class (see
+\code{\link[=splice]{splice()}}). You can process spliced objects manually, perhaps with
+a custom predicate (see \code{\link[=flatten_if]{flatten_if()}}).
+}
+\examples{
+dots <- dots_values(!!! list(1))
+dots
+
+# Flatten the spliced objects:
+flatten_if(dots, is_spliced)
+}
diff --git a/man/duplicate.Rd b/man/duplicate.Rd
new file mode 100644
index 0000000..6c51af5
--- /dev/null
+++ b/man/duplicate.Rd
@@ -0,0 +1,34 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-node.R
+\name{duplicate}
+\alias{duplicate}
+\title{Duplicate an R object}
+\usage{
+duplicate(x, shallow = FALSE)
+}
+\arguments{
+\item{x}{Any R object. However, uncopyable types like symbols and
+environments are returned as is (just like with \code{<-}).}
+
+\item{shallow}{This is relevant for recursive data structures like
+lists, calls and pairlists. A shallow copy only duplicates the
+top-level data structure. The objects contained in the list are
+still the same.}
+}
+\description{
+In R semantics, objects are copied by value. This means that
+modifying the copy leaves the original object intact. Since
+copying data in memory is an expensive operation, copies in R are
+as lazy as possible. They only happen when the new object is
+actually modified. However, some operations (like \code{\link[=mut_node_car]{mut_node_car()}}
+or \code{\link[=mut_node_cdr]{mut_node_cdr()}}) do not support copy-on-write. In those cases,
+it is necessary to duplicate the object manually in order to
+preserve copy-by-value semantics.
+}
+\details{
+Some objects are not duplicable, like symbols and environments.
+\code{duplicate()} returns its input unchanged for these objects.
+}
+\seealso{
+pairlist
+}
diff --git a/man/empty_env.Rd b/man/empty_env.Rd
new file mode 100644
index 0000000..66e1c1a
--- /dev/null
+++ b/man/empty_env.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{empty_env}
+\alias{empty_env}
+\title{Get the empty environment}
+\usage{
+empty_env()
+}
+\description{
+The empty environment is the only one that does not have a parent.
+It is always used as the tail of a scope chain such as the search
+path (see \code{\link[=scoped_names]{scoped_names()}}).
+}
+\examples{
+# Create environments with nothing in scope:
+child_env(empty_env())
+}
diff --git a/man/env.Rd b/man/env.Rd
new file mode 100644
index 0000000..6a2fa08
--- /dev/null
+++ b/man/env.Rd
@@ -0,0 +1,147 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env}
+\alias{env}
+\alias{child_env}
+\alias{new_environment}
+\title{Create a new environment}
+\usage{
+env(...)
+
+child_env(.parent, ...)
+
+new_environment(data = list())
+}
+\arguments{
+\item{..., data}{Named values. The dots have \link[=dots_list]{explicit splicing
+semantics}.}
+
+\item{.parent}{A parent environment. Can be an object supported by
+\code{\link[=as_env]{as_env()}}.}
+}
+\description{
+These functions create new environments.
+\itemize{
+\item \code{env()} always creates a child of the current environment.
+\item \code{child_env()} lets you specify a parent (see section on
+inheritance).
+\item \code{new_environment()} creates a child of the empty environment. It
+is useful e.g. for using environments as containers of data
+rather than as part of a scope hierarchy.
+}
+}
+\section{Environments as objects}{
+
+
+Environments are containers of uniquely named objects. Their most
+common use is to provide a scope for the evaluation of R
+expressions. Not all languages have first class environments,
+i.e. the ability to manipulate scope as regular objects. Reification of
+scope is one of the most powerful features of R as it allows you to change
+what objects a function or expression sees when it is evaluated.
+
+Environments also constitute a data structure in their own
+right. They are \link[=dictionary]{dictionaries} of uniquely named
+objects, subsettable by name and modifiable by reference. This
+latter property (see section on reference semantics) is especially
+useful for creating mutable OO systems (cf. the \href{https://github.com/wch/R6}{R6 package} and the \href{http://ggplot2.tidyverse.org/articles/extending-ggplot2.html}{ggproto system}
+for extending ggplot2).
+}
+
+\section{Inheritance}{
+
+
+All R environments (except the \link[=empty_env]{empty environment}) are
+defined with a parent environment. An environment and its
+grandparents thus form a linear hierarchy that is the basis for
+\href{https://en.wikipedia.org/wiki/Scope_(computer_science)}{lexical scoping} in
+R. When R evaluates an expression, it looks up symbols in a given
+environment. If it cannot find these symbols there, it keeps
+looking them up in parent environments. This way, objects defined
+in child environments have precedence over objects defined in
+parent environments.
+
+The ability to override specific definitions is used in the
+tidyeval framework to create powerful domain-specific grammars. A
+common use of overscoping is to put data frame columns in
+scope. See \code{\link[=as_overscope]{as_overscope()}} for technical details.
+}
+
+\section{Reference semantics}{
+
+
+Unlike regular objects such as vectors, environments are an
+\link[=is_copyable]{uncopyable} object type. This means that if you
+have multiple references to a given environment (by assigning the
+environment to another symbol with \code{<-} or passing the environment
+as an argument to a function), modifying the bindings of one of those
+references changes all other references as well.
+}
+
+\examples{
+# env() creates a new environment which has the current environment
+# as parent
+env <- env(a = 1, b = "foo")
+env$b
+identical(env_parent(env), get_env())
+
+
+# child_env() lets you specify a parent:
+child <- child_env(env, c = "bar")
+identical(env_parent(child), env)
+
+# This child environment owns `c` but inherits `a` and `b` from `env`:
+env_has(child, c("a", "b", "c", "d"))
+env_has(child, c("a", "b", "c", "d"), inherit = TRUE)
+
+# `parent` is passed to as_env() to provide handy shortcuts. Pass a
+# string to create a child of a package environment:
+child_env("rlang")
+env_parent(child_env("rlang"))
+
+# Or `NULL` to create a child of the empty environment:
+child_env(NULL)
+env_parent(child_env(NULL))
+
+# The base package environment is often a good default choice for a
+# parent environment because it contains all standard base
+# functions. Also note that it will never inherit from other loaded
+# package environments since R keeps the base package at the tail
+# of the search path:
+base_child <- child_env("base")
+env_has(base_child, c("lapply", "("), inherit = TRUE)
+
+# On the other hand, a child of the empty environment doesn't even
+# see a definition for `(`
+empty_child <- child_env(NULL)
+env_has(empty_child, c("lapply", "("), inherit = TRUE)
+
+# Note that all other package environments inherit from base_env()
+# as well:
+rlang_child <- child_env("rlang")
+env_has(rlang_child, "env", inherit = TRUE)     # rlang function
+env_has(rlang_child, "lapply", inherit = TRUE)  # base function
+
+
+# Both env() and child_env() take dots with explicit splicing:
+objs <- list(b = "foo", c = "bar")
+env <- env(a = 1, !!! objs)
+env$c
+
+# You can also unquote names with the definition operator `:=`
+var <- "a"
+env <- env(!!var := "A")
+env$a
+
+
+# Use new_environment() to create containers with the empty
+# environment as parent:
+env <- new_environment()
+env_parent(env)
+
+# Like other new_ constructors, it takes an object rather than dots:
+new_environment(list(a = "foo", b = "bar"))
+}
+\seealso{
+\code{scoped_env}, \code{\link[=env_has]{env_has()}}, \code{\link[=env_bind]{env_bind()}}.
+}
diff --git a/man/env_bind.Rd b/man/env_bind.Rd
new file mode 100644
index 0000000..db73311
--- /dev/null
+++ b/man/env_bind.Rd
@@ -0,0 +1,156 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_bind}
+\alias{env_bind}
+\alias{env_bind_exprs}
+\alias{env_bind_fns}
+\title{Bind symbols to objects in an environment}
+\usage{
+env_bind(.env, ...)
+
+env_bind_exprs(.env, ..., .eval_env = caller_env())
+
+env_bind_fns(.env, ...)
+}
+\arguments{
+\item{.env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}. This argument
+is passed to \code{\link[=get_env]{get_env()}}.}
+
+\item{...}{Pairs of names and expressions, values or
+functions. These dots support splicing (with varying semantics,
+see above) and name unquoting.}
+
+\item{.eval_env}{The environment where the expressions will be
+evaluated when the symbols are forced.}
+}
+\value{
+The input object \code{.env}, with its associated environment
+modified in place, invisibly.
+}
+\description{
+These functions create bindings in an environment. The bindings are
+supplied through \code{...} as pairs of names and values or expressions.
+\code{env_bind()} is equivalent to evaluating a \code{<-} expression within
+the given environment. This function should take care of the
+majority of use cases but the other variants can be useful for
+specific problems.
+\itemize{
+\item \code{env_bind()} takes named \emph{values}. The arguments are evaluated
+once (with \link[=dots_list]{explicit splicing}) and bound in \code{.env}.
+\code{env_bind()} is equivalent to \code{\link[base:assign]{base::assign()}}.
+\item \code{env_bind_fns()} takes named \emph{functions} and creates active
+bindings in \code{.env}. This is equivalent to
+\code{\link[base:makeActiveBinding]{base::makeActiveBinding()}}. An active binding executes a
+function each time it is evaluated. \code{env_bind_fns()} takes dots
+with \link[=dots_splice]{implicit splicing}, so that you can supply
+both named functions and named lists of functions.
+
+If these functions are \link[=is_closure]{closures} they are lexically
+scoped in the environment that they bundle. These functions can
+thus refer to symbols from this enclosure that are not actually
+in scope in the dynamic environment where the active bindings are
+invoked. This allows creative solutions to difficult problems
+(see the implementations of \code{dplyr::do()} methods for an
+example).
+\item \code{env_bind_exprs()} takes named \emph{expressions}. This is equivalent
+to \code{\link[base:delayedAssign]{base::delayedAssign()}}. The arguments are captured with
+\code{\link[=exprs]{exprs()}} (and thus support call-splicing and unquoting) and
+assigned to symbols in \code{.env}. These expressions are not
+evaluated immediately but lazily. Once a symbol is evaluated, the
+corresponding expression is evaluated in turn and its value is
+bound to the symbol (the expressions are thus evaluated only
+once, if at all).
+}
+}
+\section{Side effects}{
+
+
+Since environments have reference semantics (see relevant section
+in \code{\link[=env]{env()}} documentation), modifying the bindings of an environment
+produces effects in all other references to that environment. In
+other words, \code{env_bind()} and its variants have side effects.
+
+As they are called primarily for their side effects, these
+functions follow the convention of returning their input invisibly.
+}
+
+\examples{
+# env_bind() is a programmatic way of assigning values to symbols
+# with `<-`. We can add bindings in the current environment:
+env_bind(get_env(), foo = "bar")
+foo
+
+# Or modify those bindings:
+bar <- "bar"
+env_bind(get_env(), bar = "BAR")
+bar
+
+# It is most useful to change other environments:
+my_env <- env()
+env_bind(my_env, foo = "foo")
+my_env$foo
+
+# A useful feature is to splice lists of named values:
+vals <- list(a = 10, b = 20)
+env_bind(my_env, !!! vals, c = 30)
+my_env$b
+my_env$c
+
+# You can also unquote a variable referring to a symbol or a string
+# as binding name:
+var <- "baz"
+env_bind(my_env, !!var := "BAZ")
+my_env$baz
+
+
+# env_bind() and its variants are generic over formulas, quosures
+# and closures. To illustrate this, let's create a closure function
+# referring to undefined bindings:
+fn <- function() list(a, b)
+fn <- set_env(fn, child_env("base"))
+
+# This would fail if run since `a` etc are not defined in the
+# enclosure of fn() (a child of the base environment):
+# fn()
+
+# Let's define those symbols:
+env_bind(fn, a = "a", b = "b")
+
+# fn() now sees the objects:
+fn()
+
+# env_bind_exprs() assigns expressions lazily:
+env <- env()
+env_bind_exprs(env, name = cat("forced!\\n"))
+env$name
+env$name
+
+# You can unquote expressions. Note that quosures are not
+# supported, only raw expressions:
+expr <- quote(message("forced!"))
+env_bind_exprs(env, name = !! expr)
+env$name
+
+# You can create active bindings with env_bind_fns()
+# Let's create some bindings in the lexical enclosure of `fn`:
+counter <- 0
+
+# And now a function that increments the counter and returns a
+# string with the count:
+fn <- function() {
+  counter <<- counter + 1
+  paste("my counter:", counter)
+}
+
+# Now we create an active binding in a child of the current
+# environment:
+env <- env()
+env_bind_fns(env, symbol = fn)
+
+# `fn` is executed each time `symbol` is evaluated or retrieved:
+env$symbol
+env$symbol
+eval_bare(quote(symbol), env)
+eval_bare(quote(symbol), env)
+}
diff --git a/man/env_bury.Rd b/man/env_bury.Rd
new file mode 100644
index 0000000..5f8cde2
--- /dev/null
+++ b/man/env_bury.Rd
@@ -0,0 +1,46 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_bury}
+\alias{env_bury}
+\title{Overscope bindings by defining symbols deeper in a scope}
+\usage{
+env_bury(.env, ...)
+}
+\arguments{
+\item{.env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}. This argument
+is passed to \code{\link[=get_env]{get_env()}}.}
+
+\item{...}{Pairs of names and expressions, values or
+functions. These dots support splicing (with varying semantics,
+see above) and name unquoting.}
+}
+\value{
+A copy of \code{.env} enclosing the new environment containing
+bindings to \code{...} arguments.
+}
+\description{
+\code{env_bury()} is like \code{\link[=env_bind]{env_bind()}} but it creates the bindings in a
+new child environment. This makes sure the new bindings have
+precedence over old ones, without altering existing environments.
+Unlike \code{env_bind()}, this function does not have side effects and
+returns a new environment (or object wrapping that environment).
+}
+\examples{
+orig_env <- env(a = 10)
+fn <- set_env(function() a, orig_env)
+
+# fn() currently sees `a` as the value `10`:
+fn()
+
+# env_bury() will bury the current scope of fn() behind a new
+# environment:
+fn <- env_bury(fn, a = 1000)
+fn()
+
+# Even though the symbol `a` is still defined deeper in the scope:
+orig_env$a
+}
+\seealso{
+\code{\link[=env_bind]{env_bind()}}, \code{\link[=env_unbind]{env_unbind()}}
+}
diff --git a/man/env_clone.Rd b/man/env_clone.Rd
new file mode 100644
index 0000000..7d8feb5
--- /dev/null
+++ b/man/env_clone.Rd
@@ -0,0 +1,24 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_clone}
+\alias{env_clone}
+\title{Clone an environment}
+\usage{
+env_clone(env, parent = env_parent(env))
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+
+\item{parent}{The parent of the cloned environment.}
+}
+\description{
+This creates a new environment containing exactly the same objects,
+optionally with a new parent.
+}
+\examples{
+env <- env(!!! mtcars)
+clone <- env_clone(env)
+identical(env, clone)
+identical(env$cyl, clone$cyl)
+}
diff --git a/man/env_depth.Rd b/man/env_depth.Rd
new file mode 100644
index 0000000..c800d6c
--- /dev/null
+++ b/man/env_depth.Rd
@@ -0,0 +1,28 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_depth}
+\alias{env_depth}
+\title{Depth of an environment chain}
+\usage{
+env_depth(env)
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+}
+\value{
+An integer.
+}
+\description{
+This function returns the number of environments between \code{env} and
+the \link[=empty_env]{empty environment}, including \code{env}. The depth of
+\code{env} is also the number of parents of \code{env} (since the empty
+environment counts as a parent).
+}
+\examples{
+env_depth(empty_env())
+env_depth(pkg_env("rlang"))
+}
+\seealso{
+The section on inheritance in \code{\link[=env]{env()}} documentation.
+}
diff --git a/man/env_get.Rd b/man/env_get.Rd
new file mode 100644
index 0000000..97235ea
--- /dev/null
+++ b/man/env_get.Rd
@@ -0,0 +1,34 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_get}
+\alias{env_get}
+\title{Get an object from an environment}
+\usage{
+env_get(env = caller_env(), nm, inherit = FALSE)
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+
+\item{nm}{The name of a binding.}
+
+\item{inherit}{Whether to look for bindings in the parent
+environments.}
+}
+\value{
+An object if it exists. Otherwise, throws an error.
+}
+\description{
+\code{env_get()} extracts an object from an environment \code{env}. By
+default, it does not look in the parent environments.
+}
+\examples{
+parent <- child_env(NULL, foo = "foo")
+env <- child_env(parent, bar = "bar")
+
+# This throws an error because `foo` is not directly defined in env:
+# env_get(env, "foo")
+
+# However `foo` can be fetched in the parent environment:
+env_get(env, "foo", inherit = TRUE)
+}
diff --git a/man/env_has.Rd b/man/env_has.Rd
new file mode 100644
index 0000000..a1d2293
--- /dev/null
+++ b/man/env_has.Rd
@@ -0,0 +1,35 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_has}
+\alias{env_has}
+\title{Does an environment have or see bindings?}
+\usage{
+env_has(env = caller_env(), nms, inherit = FALSE)
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+
+\item{nms}{A character vector containing the names of the bindings
+to check.}
+
+\item{inherit}{Whether to look for bindings in the parent
+environments.}
+}
+\value{
+A logical vector as long as \code{nms}.
+}
+\description{
+\code{env_has()} is a vectorised predicate that queries whether an
+environment owns bindings personally (with \code{inherit} set to
+\code{FALSE}, the default), or sees them in its own environment or in
+any of its parents (with \code{inherit = TRUE}).
+}
+\examples{
+parent <- child_env(NULL, foo = "foo")
+env <- child_env(parent, bar = "bar")
+
+# env does not own `foo` but sees it in its parent environment:
+env_has(env, "foo")
+env_has(env, "foo", inherit = TRUE)
+}
diff --git a/man/env_inherits.Rd b/man/env_inherits.Rd
new file mode 100644
index 0000000..b72a980
--- /dev/null
+++ b/man/env_inherits.Rd
@@ -0,0 +1,17 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_inherits}
+\alias{env_inherits}
+\title{Does environment inherit from another environment?}
+\usage{
+env_inherits(env, ancestor)
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+
+\item{ancestor}{Another environment from which \code{env} might inherit.}
+}
+\description{
+This returns \code{TRUE} if \code{env} has \code{ancestor} among its parents.
+}
diff --git a/man/env_names.Rd b/man/env_names.Rd
new file mode 100644
index 0000000..440d63f
--- /dev/null
+++ b/man/env_names.Rd
@@ -0,0 +1,52 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_names}
+\alias{env_names}
+\title{Names of symbols bound in an environment}
+\usage{
+env_names(env)
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+}
+\value{
+A character vector of object names.
+}
+\description{
+\code{env_names()} returns object names from an environment \code{env} as a
+character vector. All names are returned, even those starting with
+a dot.
+}
+\section{Names of symbols and objects}{
+
+
+Technically, objects are bound to symbols rather than strings,
+since the R interpreter evaluates symbols (see \code{\link[=is_expr]{is_expr()}} for a
+discussion of symbolic objects versus literal objects). However it
+is often more convenient to work with strings. In rlang
+terminology, the string corresponding to a symbol is called the
+\emph{name} of the symbol (or by extension the name of an object bound
+to a symbol).
+}
+
+\section{Encoding}{
+
+
+There are deep encoding issues when you convert a string to a symbol
+and vice versa. Symbols are \emph{always} in the native encoding (see
+\code{\link[=set_chr_encoding]{set_chr_encoding()}}). If that encoding (let's say latin1) cannot
+support some characters, these characters are serialised to
+ASCII. That's why you sometimes see strings looking like
+\code{<U+1234>}, especially if you're running Windows (as R doesn't
+support UTF-8 as native encoding on that platform).
+
+To alleviate some of the encoding pain, \code{env_names()} always
+returns a UTF-8 character vector (which is fine even on Windows)
+with Unicode points unserialised.
+}
+
+\examples{
+env <- env(a = 1, b = 2)
+env_names(env)
+}
diff --git a/man/env_parent.Rd b/man/env_parent.Rd
new file mode 100644
index 0000000..c710a2f
--- /dev/null
+++ b/man/env_parent.Rd
@@ -0,0 +1,56 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_parent}
+\alias{env_parent}
+\alias{env_tail}
+\alias{env_parents}
+\title{Get parent environments}
+\usage{
+env_parent(env = caller_env(), n = 1)
+
+env_tail(env = caller_env())
+
+env_parents(env = caller_env())
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+
+\item{n}{The number of generations to go up.}
+}
+\value{
+An environment for \code{env_parent()} and \code{env_tail()}, a list
+of environments for \code{env_parents()}.
+}
+\description{
+\itemize{
+\item \code{env_parent()} returns the parent environment of \code{env} if called
+with \code{n = 1}, the grandparent with \code{n = 2}, etc.
+\item \code{env_tail()} searches through the parents and returns the one
+which has \code{\link[=empty_env]{empty_env()}} as parent.
+\item \code{env_parents()} returns the list of all parents, including the
+empty environment.
+}
+
+See the section on \emph{inheritance} in \code{\link[=env]{env()}}'s documentation.
+}
+\examples{
+# Get the parent environment with env_parent():
+env_parent(global_env())
+
+# Or the tail environment with env_tail():
+env_tail(global_env())
+
+# By default, env_parent() returns the parent environment of the
+# current evaluation frame. If called at top-level (the global
+# frame), the following two expressions are equivalent:
+env_parent()
+env_parent(base_env())
+
+# This default is more handy when called within a function. In this
+# case, the enclosure environment of the function is returned
+# (since it is the parent of the evaluation frame):
+enclos_env <- env()
+fn <- set_env(function() env_parent(), enclos_env)
+identical(enclos_env, fn())
+}
diff --git a/man/env_unbind.Rd b/man/env_unbind.Rd
new file mode 100644
index 0000000..3d704e0
--- /dev/null
+++ b/man/env_unbind.Rd
@@ -0,0 +1,43 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{env_unbind}
+\alias{env_unbind}
+\title{Remove bindings from an environment}
+\usage{
+env_unbind(env = caller_env(), nms, inherit = FALSE)
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+
+\item{nms}{A character vector containing the names of the bindings
+to remove.}
+
+\item{inherit}{Whether to look for bindings in the parent
+environments.}
+}
+\value{
+The input object \code{env} with its associated environment
+modified in place, invisibly.
+}
+\description{
+\code{env_unbind()} is the complement of \code{\link[=env_bind]{env_bind()}}. Like \code{env_has()},
+it ignores the parent environments of \code{env} by default. Set
+\code{inherit} to \code{TRUE} to track down bindings in parent environments.
+}
+\examples{
+data <- set_names(as_list(letters), letters)
+env_bind(environment(), !!! data)
+env_has(environment(), letters)
+
+# env_unbind() removes bindings:
+env_unbind(environment(), letters)
+env_has(environment(), letters)
+
+# With inherit = TRUE, it removes bindings in parent environments
+# as well:
+parent <- child_env(NULL, foo = "a")
+env <- child_env(parent, foo = "b")
+env_unbind(env, "foo", inherit = TRUE)
+env_has(env, "foo", inherit = TRUE)
+}
diff --git a/man/eval_bare.Rd b/man/eval_bare.Rd
new file mode 100644
index 0000000..186b3eb
--- /dev/null
+++ b/man/eval_bare.Rd
@@ -0,0 +1,77 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/eval.R
+\name{eval_bare}
+\alias{eval_bare}
+\title{Evaluate an expression in an environment}
+\usage{
+eval_bare(expr, env = parent.frame())
+}
+\arguments{
+\item{expr}{An expression to evaluate.}
+
+\item{env}{The environment in which to evaluate the expression.}
+}
+\description{
+\code{eval_bare()} is a lightweight version of the base function
+\code{\link[base:eval]{base::eval()}}. It does not accept supplementary data, but it is
+more efficient and does not clutter the evaluation stack.
+Technically, \code{eval_bare()} is a simple wrapper around the C
+function \code{Rf_eval()}.
+}
+\details{
+\code{base::eval()} inserts two call frames in the stack, the second of
+which features the \code{envir} parameter as frame environment. This may
+unnecessarily clutter the evaluation stack and it can change
+evaluation semantics with stack sensitive functions in the case
+where \code{env} is an evaluation environment of a stack frame (see
+\code{\link[=ctxt_stack]{ctxt_stack()}}). Since the base function \code{eval()} creates a new
+evaluation context with \code{env} as frame environment there are
+actually two contexts with the same evaluation environment on the
+stack when \code{expr} is evaluated. Thus, any command that looks up
+frames on the stack (stack-sensitive functions) may find the
+parasitic frame set up by \code{eval()} rather than the original frame
+targeted by \code{env}. As a result, code evaluated with \code{base::eval()}
+does not have the property of stack consistency, and
+stack-sensitive functions like \code{\link[base:return]{base::return()}} and \code{\link[base:parent.frame]{base::parent.frame()}}
+may return misleading results.
+}
+\examples{
+# eval_bare() works just like base::eval():
+env <- child_env(NULL, foo = "bar")
+expr <- quote(foo)
+eval_bare(expr, env)
+
+# To explore the consequences of stack inconsistent semantics, let's
+# create a function that evaluates `parent.frame()` deep in the call
+# stack, in an environment corresponding to a frame in the middle of
+# the stack. For consistency with R's lazy evaluation semantics, we'd
+# expect to get the caller of that frame as result:
+fn <- function(eval_fn) {
+  list(
+    returned_env = middle(eval_fn),
+    actual_env = get_env()
+  )
+}
+middle <- function(eval_fn) {
+  deep(eval_fn, get_env())
+}
+deep <- function(eval_fn, eval_env) {
+  expr <- quote(parent.frame())
+  eval_fn(expr, eval_env)
+}
+
+# With eval_bare(), we do get the expected environment:
+fn(rlang::eval_bare)
+
+# But that's not the case with base::eval():
+fn(base::eval)
+
+# Another difference between eval_bare() and base::eval() is
+# that it does not insert parasitic frames in the evaluation stack:
+get_stack <- quote(identity(ctxt_stack()))
+eval_bare(get_stack)
+eval(get_stack)
+}
+\seealso{
+with_env
+}
diff --git a/man/eval_tidy.Rd b/man/eval_tidy.Rd
new file mode 100644
index 0000000..625ea56
--- /dev/null
+++ b/man/eval_tidy.Rd
@@ -0,0 +1,99 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/eval-tidy.R
+\name{eval_tidy}
+\alias{eval_tidy}
+\title{Evaluate an expression tidily}
+\usage{
+eval_tidy(expr, data = NULL, env = caller_env())
+}
+\arguments{
+\item{expr}{An expression.}
+
+\item{data}{A list (or data frame). This is passed to the
+\code{\link[=as_dictionary]{as_dictionary()}} coercer, a generic used to transform an object
+to a proper data source. If you want to make \code{eval_tidy()} work
+for your own objects, you can define a method for this generic.}
+
+\item{env}{The lexical environment in which to evaluate \code{expr}.}
+}
+\description{
+\code{eval_tidy()} is a variant of \code{\link[base:eval]{base::eval()}} and \code{\link[=eval_bare]{eval_bare()}} that
+powers the \href{http://rlang.tidyverse.org/articles/tidy-evaluation.html}{tidy evaluation framework}.
+It evaluates \code{expr} in an \link[=as_overscope]{overscope} where the
+special definitions enabling tidy evaluation are installed. This
+enables the following features:
+\itemize{
+\item Overscoped data. You can supply a data frame or list of named
+vectors to the \code{data} argument. The data contained in this list
+has precedence over the objects in the contextual environment.
+This is similar to how \code{\link[base:eval]{base::eval()}} accepts a list instead of
+an environment.
+\item Self-evaluation of quosures. Within the overscope, quosures act
+like promises. When a quosure within an expression is evaluated,
+it automatically invokes the quoted expression in the captured
+environment (chained to the overscope). Note that quosures do not
+always get evaluated because of lazy semantics, e.g. \code{TRUE || ~never_called}.
+\item Pronouns. \code{eval_tidy()} installs the \code{.env} and \code{.data}
+pronouns. \code{.env} contains a reference to the calling environment,
+while \code{.data} refers to the \code{data} argument. These pronouns let
+you be explicit about where to find values and throw errors if
+you try to access non-existent values.
+}
+}
+\examples{
+# Like base::eval() and eval_bare(), eval_tidy() evaluates quoted
+# expressions:
+expr <- expr(1 + 2 + 3)
+eval_tidy(expr)
+
+# Like base::eval(), it lets you supply overscoping data:
+foo <- 1
+bar <- 2
+expr <- quote(list(foo, bar))
+eval_tidy(expr, list(foo = 100))
+
+# The main difference is that quosures self-evaluate within
+# eval_tidy():
+quo <- quo(1 + 2 + 3)
+eval(quo)
+eval_tidy(quo)
+
+# Quosures also self-evaluate deep in an expression not just when
+# directly supplied to eval_tidy():
+expr <- expr(list(list(list(!! quo))))
+eval(expr)
+eval_tidy(expr)
+
+# Self-evaluation of quosures is powerful because they
+# automatically capture their enclosing environment:
+foo <- function(x) {
+  y <- 10
+  quo(x + y)
+}
+f <- foo(1)
+
+# This quosure refers to `x` and `y` from `foo()`'s evaluation
+# frame. That's evaluated consistently by eval_tidy():
+f
+eval_tidy(f)
+
+
+# Finally, eval_tidy() installs handy pronouns that allow users to
+# be explicit about where to find symbols. If you supply data,
+# eval_tidy() will look there first:
+cyl <- 10
+eval_tidy(quo(cyl), mtcars)
+
+# To avoid ambiguity and be explicit, you can use the `.env` and
+# `.data` pronouns:
+eval_tidy(quo(.data$cyl), mtcars)
+eval_tidy(quo(.env$cyl), mtcars)
+
+# Note that instead of using `.env` it is often equivalent to
+# unquote a value. The only difference is the timing of evaluation
+# since unquoting happens earlier (when the quosure is created):
+eval_tidy(quo(!! cyl), mtcars)
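+
+# The pronouns signal an error if you try to access a value that
+# doesn't exist. This line is commented out because it would fail
+# (`not_a_column` is a made-up name for illustration):
+# eval_tidy(quo(.data$not_a_column), mtcars)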
+}
+\seealso{
+\code{\link[=quo]{quo()}}, \link{quasiquotation}
+}
diff --git a/man/eval_tidy_.Rd b/man/eval_tidy_.Rd
new file mode 100644
index 0000000..9387c6d
--- /dev/null
+++ b/man/eval_tidy_.Rd
@@ -0,0 +1,43 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/eval-tidy.R
+\name{eval_tidy_}
+\alias{eval_tidy_}
+\title{Tidy evaluation in a custom environment}
+\usage{
+eval_tidy_(expr, bottom, top = NULL, env = caller_env())
+}
+\arguments{
+\item{expr}{An expression.}
+
+\item{bottom}{This is the environment (or the bottom of a set of
+environments) containing definitions for overscoped symbols. The
+bottom environment typically contains pronouns (like \code{.data})
+while its direct parents contain the overscoping bindings. The
+last one of these parents is the \code{top}.}
+
+\item{top}{The top environment of the overscope. During tidy
+evaluation, this environment is chained and rechained to lexical
+enclosures of self-evaluating formulas (or quosures). This is the
+mechanism that ensures hygienic scoping: the bindings in the
+overscope have precedence, but the bindings in the dynamic
+environment where the tidy quotes were created in the first place
+are in scope as well.}
+
+\item{env}{The lexical environment in which to evaluate \code{expr}.}
+}
+\description{
+We recommend using \code{\link[=eval_tidy]{eval_tidy()}} in your DSLs as much as possible
+to ensure some consistency across packages (\code{.data} and \code{.env}
+pronouns, etc). However, some DSLs might need a different
+evaluation environment. In this case, you can call \code{eval_tidy_()}
+with the bottom and the top of your custom overscope (see
+\code{\link[=as_overscope]{as_overscope()}} for more information).
+}
+\details{
+Note that \code{eval_tidy_()} always installs a \code{.env} pronoun in the
+bottom environment of your dynamic scope. This pronoun provides a
+shortcut to the original lexical enclosure (typically, the dynamic
+environment of a captured argument, see \code{\link[=enquo]{enquo()}}). It also
+cleans up the overscope after evaluation. See \code{\link[=overscope_eval_next]{overscope_eval_next()}}
+for evaluating several quosures in the same overscope.
+}
diff --git a/man/exiting.Rd b/man/exiting.Rd
new file mode 100644
index 0000000..a04c70d
--- /dev/null
+++ b/man/exiting.Rd
@@ -0,0 +1,63 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd-handlers.R
+\name{exiting}
+\alias{exiting}
+\alias{inplace}
+\title{Create an exiting or in place handler}
+\usage{
+exiting(handler)
+
+inplace(handler, muffle = FALSE)
+}
+\arguments{
+\item{handler}{A handler function that takes a condition as
+argument. This is passed to \code{\link[=as_function]{as_function()}} and can thus be a
+formula describing a lambda function.}
+
+\item{muffle}{Whether to muffle the condition after executing an
+inplace handler. The signalling function must have established a
+muffling restart. Otherwise, an error will be issued.}
+}
+\description{
+There are two types of condition handlers: exiting handlers, which
+are thrown to the place where they have been established (e.g.,
+\code{\link[=with_handlers]{with_handlers()}}'s evaluation frame), and local handlers, which
+are executed in place (e.g., where the condition has been
+signalled). \code{exiting()} and \code{inplace()} create handlers suitable
+for \code{\link[=with_handlers]{with_handlers()}}.
+}
+\details{
+A subtle point in the R language is that conditions are not thrown,
+handlers are. \code{\link[base:tryCatch]{base::tryCatch()}} and \code{\link[=with_handlers]{with_handlers()}} actually
+catch handlers rather than conditions. When a critical condition
+is signalled with \code{\link[base:stop]{base::stop()}} or \code{\link[=abort]{abort()}}, R inspects the handler
+stack and looks for a handler that can deal with the condition. If
+it finds an exiting handler, it throws it to the function that
+established it (\code{\link[=with_handlers]{with_handlers()}}). That is, it interrupts the
+normal course of evaluation and jumps to the \code{with_handlers()}
+evaluation frame (see \code{\link[=ctxt_stack]{ctxt_stack()}}), and only then is
+the handler called. On the other hand, if R finds an inplace
+handler, it executes it locally. The inplace handler can choose to
+handle the condition by jumping out of the frame (see \code{\link[=rst_jump]{rst_jump()}}
+or \code{\link[=return_from]{return_from()}}). If it returns locally, it declines to handle
+the condition which is passed to the next relevant handler on the
+stack. If no handler is found or is able to deal with the critical
+condition (by jumping out of the frame), R will then jump out of
+the faulty evaluation frame to top-level, via the abort restart
+(see \code{\link[=rst_abort]{rst_abort()}}).
+}
+\examples{
+# You can supply a function taking a condition as argument:
+hnd <- exiting(function(c) cat("handled foo\\n"))
+with_handlers(cnd_signal("foo"), foo = hnd)
+
+# Or a lambda-formula where "." is bound to the condition:
+with_handlers(foo = inplace(~cat("hello", .$attr, "\\n")), {
+  cnd_signal("foo", attr = "there")
+  "foo"
+})
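+
+# An exiting handler interrupts evaluation and jumps back to
+# with_handlers(); its return value becomes the result:
+with_handlers(cnd_signal("foo"), foo = exiting(~"handled!"))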
+}
+\seealso{
+\code{\link[=with_handlers]{with_handlers()}} for examples, \code{\link[=restarting]{restarting()}} for another
+kind of inplace handler.
+}
diff --git a/man/expr.Rd b/man/expr.Rd
new file mode 100644
index 0000000..53d11ff
--- /dev/null
+++ b/man/expr.Rd
@@ -0,0 +1,83 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr.R
+\name{expr}
+\alias{expr}
+\alias{enexpr}
+\alias{exprs}
+\title{Raw quotation of an expression}
+\usage{
+expr(expr)
+
+enexpr(arg)
+
+exprs(..., .ignore_empty = "trailing")
+}
+\arguments{
+\item{expr}{An expression.}
+
+\item{arg}{A symbol referring to an argument. The expression
+supplied to that argument will be captured unevaluated.}
+
+\item{...}{Arguments to extract.}
+
+\item{.ignore_empty}{Whether to ignore empty arguments. Can be one
+of \code{"trailing"}, \code{"none"}, \code{"all"}. If \code{"trailing"}, only the
+last argument is ignored if it is empty.}
+}
+\value{
+The raw expression supplied as argument. \code{exprs()} returns
+a list of expressions.
+}
+\description{
+These functions return raw expressions (whereas \code{\link[=quo]{quo()}} and
+variants return quosures). They support \link{quasiquotation}
+syntax.
+\itemize{
+\item \code{expr()} returns its argument unevaluated. It is equivalent to
+\code{\link[base:bquote]{base::bquote()}}.
+\item \code{enexpr()} takes an argument name and returns it unevaluated. It
+is equivalent to \code{\link[base:substitute]{base::substitute()}}.
+\item \code{exprs()} captures multiple expressions and returns a list. In
+particular, it can capture expressions in \code{...}. It supports name
+unquoting with \code{:=} (see \code{\link[=quos]{quos()}}). It is equivalent to
+\code{eval(substitute(alist(...)))}.
+}
+
+See \code{\link[=is_expr]{is_expr()}} for more about R expressions.
+}
+\examples{
+# The advantage of expr() over quote() is that it unquotes on
+# capture:
+expr(list(1, !! 3 + 10))
+
+# Unquoting can be especially useful for successive transformation
+# of a captured expression:
+(expr <- quote(foo(bar)))
+(expr <- expr(inner(!! expr, arg1)))
+(expr <- expr(outer(!! expr, !!! lapply(letters[1:3], as.symbol))))
+
+# Unlike quo(), expr() produces expressions that can
+# be evaluated with base::eval():
+e <- quote(letters)
+e <- expr(toupper(!!e))
+eval(e)
+
+# Be careful if you unquote a quosure: you need to take the RHS
+# (and lose the scope information) to evaluate with eval():
+f <- quo(letters)
+e <- expr(toupper(!! get_expr(f)))
+eval(e)
+
+# On the other hand it's fine to unquote quosures if you evaluate
+# with eval_tidy():
+f <- quo(letters)
+e <- expr(toupper(!! f))
+eval_tidy(e)
+
+# exprs() lets you unquote names with the definition operator:
+nm <- "foo"
+exprs(a = 1, !! nm := 2)
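+
+# exprs() can also capture the expressions supplied to a
+# function's dots:
+capture_dots <- function(...) exprs(...)
+capture_dots(a = x + y, b)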
+}
+\seealso{
+\code{\link[=quo]{quo()}}, \code{\link[=is_expr]{is_expr()}}
+}
diff --git a/man/expr_interp.Rd b/man/expr_interp.Rd
new file mode 100644
index 0000000..ef7d371
--- /dev/null
+++ b/man/expr_interp.Rd
@@ -0,0 +1,53 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quo-unquote.R
+\name{expr_interp}
+\alias{expr_interp}
+\title{Process unquote operators in a captured expression}
+\usage{
+expr_interp(x, env = NULL)
+}
+\arguments{
+\item{x}{A function, raw expression, or formula to interpolate.}
+
+\item{env}{The environment in which unquoted expressions should be
+evaluated. By default, this is the environment of the formula or
+closure if \code{x} is one, or the current environment otherwise.}
+}
+\description{
+While all capturing functions in the tidy evaluation framework
+perform unquote on capture (most notably \code{\link[=quo]{quo()}}),
+\code{expr_interp()} manually processes unquoting operators in
+expressions that are already captured. \code{expr_interp()} should be
+called in all user-facing functions expecting a formula as argument
+to provide the same quasiquotation functionality as NSE functions.
+}
+\examples{
+# All tidy NSE functions like quo() unquote on capture:
+quo(list(!! 1 + 2))
+
+# expr_interp() is meant to provide the same functionality when you
+# have a formula or expression that might contain unquoting
+# operators:
+f <- ~list(!! 1 + 2)
+expr_interp(f)
+
+# Note that only the outer formula is unquoted (which is a reason
+# to use expr_interp() as early as possible in all user-facing
+# functions):
+f <- ~list(~!! 1 + 2, !! 1 + 2)
+expr_interp(f)
+
+
+# Another purpose for expr_interp() is to interpolate a closure's
+# body. This is useful to inline a function within another. The
+# important limitation is that all formal arguments of the inlined
+# function should be defined in the receiving function:
+other_fn <- function(x) toupper(x)
+
+fn <- expr_interp(function(x) {
+  x <- paste0(x, "_suffix")
+  !!! body(other_fn)
+})
+fn
+fn("foo")
+}
diff --git a/man/expr_label.Rd b/man/expr_label.Rd
new file mode 100644
index 0000000..0408e06
--- /dev/null
+++ b/man/expr_label.Rd
@@ -0,0 +1,47 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr.R
+\name{expr_label}
+\alias{expr_label}
+\alias{expr_name}
+\alias{expr_text}
+\title{Turn an expression into a label}
+\usage{
+expr_label(expr)
+
+expr_name(expr)
+
+expr_text(expr, width = 60L, nlines = Inf)
+}
+\arguments{
+\item{expr}{An expression to labellise.}
+
+\item{width}{Width of each line.}
+
+\item{nlines}{Maximum number of lines to extract.}
+}
+\description{
+\code{expr_text()} turns the expression into a single string, which
+might be multi-line. \code{expr_name()} is suitable for formatting
+names. It works best with symbols and scalar types, but also
+accepts calls. \code{expr_label()} formats the expression nicely for use
+in messages.
+}
+\examples{
+# To labellise a function argument, first capture it with
+# substitute():
+fn <- function(x) expr_label(substitute(x))
+fn(x:y)
+
+# Strings are encoded
+expr_label("a\\nb")
+
+# Names and expressions are quoted with ``
+expr_label(quote(x))
+expr_label(quote(a + b + c))
+
+# Long expressions are collapsed
+expr_label(quote(foo({
+  1 + 2
+  print(x)
+})))
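+
+# expr_text() turns an expression into a (possibly multi-line)
+# string:
+expr_text(quote(foo({
+  1 + 2
+})))
+
+# expr_name() is suitable for formatting short names:
+expr_name(quote(x))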
+}
diff --git a/man/exprs_auto_name.Rd b/man/exprs_auto_name.Rd
new file mode 100644
index 0000000..e61c021
--- /dev/null
+++ b/man/exprs_auto_name.Rd
@@ -0,0 +1,29 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quos.R
+\name{exprs_auto_name}
+\alias{exprs_auto_name}
+\alias{quos_auto_name}
+\title{Ensure that all expressions in a list are named}
+\usage{
+exprs_auto_name(exprs, width = 60L, printer = expr_text)
+
+quos_auto_name(quos, width = 60L)
+}
+\arguments{
+\item{exprs}{A list of expressions.}
+
+\item{width}{Maximum width of names.}
+
+\item{printer}{A function that takes an expression and converts it
+to a string. This function must take an expression as first
+argument and \code{width} as second argument.}
+
+\item{quos}{A list of quosures.}
+}
+\description{
+This gives default names to unnamed elements of a list of
+expressions (or expression wrappers such as formulas or tidy
+quotes). \code{exprs_auto_name()} deparses the expressions with
+\code{\link[=expr_text]{expr_text()}} by default. \code{quos_auto_name()} deparses with
+\code{\link[=quo_text]{quo_text()}}.
+}
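+\examples{
+# A minimal illustration, assuming a plain list of quoted
+# expressions: unnamed elements are given deparsed default names
+# while existing names are kept.
+exprs_auto_name(list(quote(x + y), foo = quote(bar)))
+}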
diff --git a/man/f_rhs.Rd b/man/f_rhs.Rd
new file mode 100644
index 0000000..741bf1f
--- /dev/null
+++ b/man/f_rhs.Rd
@@ -0,0 +1,49 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/formula.R
+\name{f_rhs}
+\alias{f_rhs}
+\alias{f_rhs<-}
+\alias{f_lhs}
+\alias{f_lhs<-}
+\alias{f_env}
+\alias{f_env<-}
+\title{Get or set formula components}
+\usage{
+f_rhs(f)
+
+f_rhs(x) <- value
+
+f_lhs(f)
+
+f_lhs(x) <- value
+
+f_env(f)
+
+f_env(x) <- value
+}
+\arguments{
+\item{f, x}{A formula}
+
+\item{value}{The value to replace with.}
+}
+\value{
+\code{f_rhs} and \code{f_lhs} return language objects (i.e. atomic
+vectors of length 1, a name, or a call). \code{f_env} returns an
+environment.
+}
+\description{
+\code{f_rhs} extracts the righthand side, \code{f_lhs} extracts the lefthand
+side, and \code{f_env} extracts the environment. All functions throw an
+error if \code{f} is not a formula.
+}
+\examples{
+f_rhs(~ 1 + 2 + 3)
+f_rhs(~ x)
+f_rhs(~ "A")
+f_rhs(1 ~ 2)
+
+f_lhs(~ y)
+f_lhs(x ~ y)
+
+f_env(~ x)
+}
diff --git a/man/f_text.Rd b/man/f_text.Rd
new file mode 100644
index 0000000..3c3a142
--- /dev/null
+++ b/man/f_text.Rd
@@ -0,0 +1,39 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/formula.R
+\name{f_text}
+\alias{f_text}
+\alias{f_name}
+\alias{f_label}
+\title{Turn RHS of formula into a string or label}
+\usage{
+f_text(x, width = 60L, nlines = Inf)
+
+f_name(x)
+
+f_label(x)
+}
+\arguments{
+\item{x}{A formula.}
+
+\item{width}{Width of each line.}
+
+\item{nlines}{Maximum number of lines to extract.}
+}
+\description{
+Equivalent of \code{\link[=expr_text]{expr_text()}} and \code{\link[=expr_label]{expr_label()}} for formulas.
+}
+\examples{
+f <- ~ a + b + bc
+f_text(f)
+f_label(f)
+
+# Names are quoted with ``
+f_label(~ x)
+# Strings are encoded
+f_label(~ "a\\nb")
+# Long expressions are collapsed
+f_label(~ foo({
+  1 + 2
+  print(x)
+}))
+}
diff --git a/man/flatten.Rd b/man/flatten.Rd
new file mode 100644
index 0000000..c7d8edc
--- /dev/null
+++ b/man/flatten.Rd
@@ -0,0 +1,112 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-squash.R
+\name{flatten}
+\alias{flatten}
+\alias{flatten_lgl}
+\alias{flatten_int}
+\alias{flatten_dbl}
+\alias{flatten_cpl}
+\alias{flatten_chr}
+\alias{flatten_raw}
+\alias{squash}
+\alias{squash_lgl}
+\alias{squash_int}
+\alias{squash_dbl}
+\alias{squash_cpl}
+\alias{squash_chr}
+\alias{squash_raw}
+\alias{flatten_if}
+\alias{squash_if}
+\title{Flatten or squash a list of lists into a simpler vector}
+\usage{
+flatten(x)
+
+flatten_lgl(x)
+
+flatten_int(x)
+
+flatten_dbl(x)
+
+flatten_cpl(x)
+
+flatten_chr(x)
+
+flatten_raw(x)
+
+squash(x)
+
+squash_lgl(x)
+
+squash_int(x)
+
+squash_dbl(x)
+
+squash_cpl(x)
+
+squash_chr(x)
+
+squash_raw(x)
+
+flatten_if(x, predicate = is_spliced)
+
+squash_if(x, predicate = is_spliced)
+}
+\arguments{
+\item{x}{A list to flatten or squash. The contents of the list can
+be anything for unsuffixed functions \code{flatten()} and \code{squash()}
+(as a list is returned), but the contents must match the type for
+the other functions.}
+
+\item{predicate}{A function of one argument returning whether it
+should be spliced.}
+}
+\value{
+\code{flatten()} returns a list, \code{flatten_lgl()} a logical
+vector, \code{flatten_int()} an integer vector, \code{flatten_dbl()} a
+double vector, and \code{flatten_chr()} a character vector. Similarly
+for \code{squash()} and the typed variants (\code{squash_lgl()} etc).
+}
+\description{
+\code{flatten()} removes one level hierarchy from a list, while
+\code{squash()} removes all levels. These functions are similar to
+\code{\link[=unlist]{unlist()}} but they are type-stable so you always know what the
+type of the output is.
+}
+\examples{
+x <- replicate(2, sample(4), simplify = FALSE)
+x
+
+flatten(x)
+flatten_int(x)
+
+# With flatten(), only one level gets removed at a time:
+deep <- list(1, list(2, list(3)))
+flatten(deep)
+flatten(flatten(deep))
+
+# But squash() removes all levels:
+squash(deep)
+squash_dbl(deep)
+
+# The typed flattens remove one level and coerce to an atomic
+# vector at the same time:
+flatten_dbl(list(1, list(2)))
+
+# Only bare lists are flattened, but you can splice S3 lists
+# explicitly:
+foo <- set_attrs(list("bar"), class = "foo")
+str(flatten(list(1, foo, list(100))))
+str(flatten(list(1, splice(foo), list(100))))
+
+# Instead of splicing manually, flatten_if() and squash_if() let
+# you specify a predicate function:
+is_foo <- function(x) inherits(x, "foo") || is_bare_list(x)
+str(flatten_if(list(1, foo, list(100)), is_foo))
+
+# squash_if() does the same with deep lists:
+deep_foo <- list(1, list(foo, list(foo, 100)))
+str(deep_foo)
+
+str(squash(deep_foo))
+str(squash_if(deep_foo, is_foo))
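+
+# Note that the typed variants are strict: they signal an error if
+# an element can't be coerced to the expected type. This line is
+# commented out because it would fail:
+# flatten_int(list(1L, "not an integer"))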
+}
diff --git a/man/fn_env.Rd b/man/fn_env.Rd
new file mode 100644
index 0000000..446e970
--- /dev/null
+++ b/man/fn_env.Rd
@@ -0,0 +1,36 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/fn.R
+\name{fn_env}
+\alias{fn_env}
+\alias{fn_env<-}
+\title{Return the closure environment of a function}
+\usage{
+fn_env(fn)
+
+fn_env(x) <- value
+}
+\arguments{
+\item{fn, x}{A function.}
+
+\item{value}{A new closure environment for the function.}
+}
+\description{
+Closure environments define the scope of functions (see \code{\link[=env]{env()}}).
+When a function call is evaluated, R creates an evaluation frame
+(see \code{\link[=ctxt_stack]{ctxt_stack()}}) that inherits from the closure environment.
+This makes all objects defined in the closure environment and all
+its parents available to code executed within the function.
+}
+\details{
+\code{fn_env()} returns the closure environment of \code{fn}. There is also
+an assignment method to set a new closure environment.
+}
+\examples{
+env <- child_env("base")
+fn <- with_env(env, function() NULL)
+identical(fn_env(fn), env)
+
+other_env <- child_env("base")
+fn_env(fn) <- other_env
+identical(fn_env(fn), other_env)
+}
diff --git a/man/fn_fmls.Rd b/man/fn_fmls.Rd
new file mode 100644
index 0000000..d81abbe
--- /dev/null
+++ b/man/fn_fmls.Rd
@@ -0,0 +1,44 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/fn.R
+\name{fn_fmls}
+\alias{fn_fmls}
+\alias{fn_fmls_names}
+\alias{fn_fmls_syms}
+\title{Extract arguments from a function}
+\usage{
+fn_fmls(fn = caller_fn())
+
+fn_fmls_names(fn = caller_fn())
+
+fn_fmls_syms(fn = caller_fn())
+}
+\arguments{
+\item{fn}{A function. It is looked up in the calling frame if not
+supplied.}
+}
+\description{
+\code{fn_fmls()} returns a named list of formal arguments.
+\code{fn_fmls_names()} returns the names of the arguments.
+\code{fn_fmls_syms()} returns formals as a named list of symbols. This
+is especially useful for forwarding arguments in \link[=lang]{constructed
+calls}.
+}
+\details{
+Unlike \code{formals()}, these helpers also work with primitive
+functions. See \code{\link[=is_function]{is_function()}} for a discussion of primitive and
+closure functions.
+}
+\examples{
+# Extract from current call:
+fn <- function(a = 1, b = 2) fn_fmls()
+fn()
+
+# Works with primitive functions:
+fn_fmls(base::switch)
+
+# fn_fmls_syms() makes it easy to forward arguments:
+lang("apply", !!! fn_fmls_syms(lapply))
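+
+# fn_fmls_names() returns just the argument names:
+fn_fmls_names(lapply)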
+}
+\seealso{
+\code{\link[=lang_args]{lang_args()}} and \code{\link[=lang_args_names]{lang_args_names()}}
+}
diff --git a/man/frame_position.Rd b/man/frame_position.Rd
new file mode 100644
index 0000000..12f706c
--- /dev/null
+++ b/man/frame_position.Rd
@@ -0,0 +1,49 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{frame_position}
+\alias{frame_position}
+\title{Find the position or distance of a frame on the evaluation stack}
+\usage{
+frame_position(frame, from = c("global", "current"))
+}
+\arguments{
+\item{frame}{The environment of a frame. Can be any object with a
+\code{\link[=get_env]{get_env()}} method. Note that for frame objects, the position from
+the global frame is simply \code{frame$pos}. Alternatively, \code{frame}
+can be an integer that represents the position on the stack (and
+is thus returned as is if \code{from} is "global").}
+
+\item{from}{Whether to compute distance from the global frame (the
+bottom of the evaluation stack), or from the current frame (the
+top of the evaluation stack).}
+}
+\description{
+The frame position on the stack can be computed by counting frames
+from the global frame (the bottom of the stack, the default) or
+from the current frame (the top of the stack).
+}
+\details{
+While this function returns the position of the frame on the
+evaluation stack, it can safely be called with intervening frames
+as those will be discarded.
+}
+\examples{
+fn <- function() g(environment())
+g <- function(env) frame_position(env)
+
+# frame_position() returns the position of the frame on the evaluation stack:
+fn()
+identity(identity(fn()))
+
+# Note that it trims off intervening calls before counting so you
+# can safely nest it within other calls:
+g <- function(env) identity(identity(frame_position(env)))
+fn()
+
+# You can also ask for the position from the current frame rather
+# than the global frame:
+fn <- function() g(environment())
+g <- function(env) h(env)
+h <- function(env) frame_position(env, from = "current")
+fn()
+}
diff --git a/man/friendly_type.Rd b/man/friendly_type.Rd
new file mode 100644
index 0000000..e26c3d4
--- /dev/null
+++ b/man/friendly_type.Rd
@@ -0,0 +1,23 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{friendly_type}
+\alias{friendly_type}
+\title{Format a type for error messages}
+\usage{
+friendly_type(type)
+}
+\arguments{
+\item{type}{A type as returned by \code{\link[=type_of]{type_of()}} or \code{\link[=lang_type_of]{lang_type_of()}}.}
+}
+\value{
+A string of the prettified type, qualified with an
+indefinite article.
+}
+\description{
+Format a type for error messages
+}
+\examples{
+friendly_type("logical")
+friendly_type("integer")
+friendly_type("string")
+}
diff --git a/man/get_env.Rd b/man/get_env.Rd
new file mode 100644
index 0000000..64a7664
--- /dev/null
+++ b/man/get_env.Rd
@@ -0,0 +1,84 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{get_env}
+\alias{get_env}
+\alias{set_env}
+\title{Get or set the environment of an object}
+\usage{
+get_env(env = caller_env(), default = NULL)
+
+set_env(env, new_env = caller_env())
+}
+\arguments{
+\item{env}{An environment or an object bundling an environment,
+e.g. a formula, \link{quosure} or \link[=is_closure]{closure}.}
+
+\item{default}{The default environment in case \code{env} does not wrap
+an environment. If \code{NULL} and no environment could be extracted,
+an error is issued.}
+
+\item{new_env}{An environment to replace \code{env} with. Can be an
+object handled by \code{get_env()}.}
+}
+\description{
+These functions dispatch internally with methods for functions,
+formulas and frames. If called with a missing argument, the
+environment of the current evaluation frame (see \code{\link[=ctxt_stack]{ctxt_stack()}}) is
+returned. If you call \code{get_env()} with an environment, it acts as
+the identity function and the environment is simply returned (this
+helps simplify code when writing generic functions for
+environments).
+}
+\examples{
+# Get the environment of frame objects. If no argument is supplied,
+# the current frame is used:
+fn <- function() {
+  list(
+    get_env(call_frame()),
+    get_env()
+  )
+}
+fn()
+
+# Environment of closure functions:
+get_env(fn)
+
+# Or of quosures or formulas:
+get_env(~foo)
+get_env(quo(foo))
+
+
+# Provide a default in case the object doesn't bundle an environment.
+# Let's create an unevaluated formula:
+f <- quote(~foo)
+
+# The following line would fail if run because unevaluated formulas
+# don't bundle an environment (they didn't have the chance to
+# record one yet):
+# get_env(f)
+
+# It is often useful to provide a default when you're writing
+# functions accepting formulas as input:
+default <- env()
+identical(get_env(f, default), default)
+
+# set_env() can be used to set the enclosure of functions and
+# formulas. Let's create a function with a particular environment:
+env <- child_env("base")
+fn <- set_env(function() NULL, env)
+
+# That function now has `env` as enclosure:
+identical(get_env(fn), env)
+identical(get_env(fn), get_env())
+
+# set_env() does not work by side effect. Setting a new environment
+# for fn has no effect on the original function:
+other_env <- child_env(NULL)
+set_env(fn, other_env)
+identical(get_env(fn), other_env)
+
+# Since set_env() returns a new function with a different
+# environment, you'll need to reassign the result:
+fn <- set_env(fn, other_env)
+identical(get_env(fn), other_env)
+}
diff --git a/man/has_length.Rd b/man/has_length.Rd
new file mode 100644
index 0000000..f6b5188
--- /dev/null
+++ b/man/has_length.Rd
@@ -0,0 +1,28 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/attr.R
+\name{has_length}
+\alias{has_length}
+\title{How long is an object?}
+\usage{
+has_length(x, n = NULL)
+}
+\arguments{
+\item{x}{An R object.}
+
+\item{n}{A specific length to test \code{x} with. If \code{NULL},
+\code{has_length()} returns \code{TRUE} if \code{x} has length greater than
+zero, and \code{FALSE} otherwise.}
+}
+\description{
+This is a function for the common task of testing the length of an
+object. It checks the length of an object in a non-generic way:
+\code{\link[base:length]{base::length()}} methods are ignored.
+}
+\examples{
+has_length(list())
+has_length(list(), 0)
+
+has_length(letters)
+has_length(letters, 20)
+has_length(letters, 26)
+}
diff --git a/man/has_name.Rd b/man/has_name.Rd
new file mode 100644
index 0000000..d844449
--- /dev/null
+++ b/man/has_name.Rd
@@ -0,0 +1,28 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/attr.R
+\name{has_name}
+\alias{has_name}
+\title{Does an object have an element with this name?}
+\usage{
+has_name(x, name)
+}
+\arguments{
+\item{x}{A data frame or another named object}
+
+\item{name}{Element name(s) to check}
+}
+\value{
+A logical vector of the same length as \code{name}
+}
+\description{
+This function returns a logical value that indicates if a data frame or
+another named object contains an element with a specific name.
+}
+\details{
+Unnamed objects are treated as if all names are empty strings. \code{NA}
+input gives \code{FALSE} as output.
+}
+\examples{
+has_name(iris, "Species")
+has_name(mtcars, "gears")
+}
diff --git a/man/invoke.Rd b/man/invoke.Rd
new file mode 100644
index 0000000..fb411ff
--- /dev/null
+++ b/man/invoke.Rd
@@ -0,0 +1,62 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/eval.R
+\name{invoke}
+\alias{invoke}
+\title{Invoke a function with a list of arguments}
+\usage{
+invoke(.fn, .args = list(), ..., .env = caller_env(), .bury = c(".fn",
+  ""))
+}
+\arguments{
+\item{.fn}{A function to invoke. Can be a function object or the
+name of a function in scope of \code{.env}.}
+
+\item{.args, ...}{List of arguments (possibly named) to be passed to
+\code{.fn}.}
+
+\item{.env}{The environment in which to call \code{.fn}.}
+
+\item{.bury}{A character vector of length 2. The first string
+specifies the name the function should have in the call
+recorded in the evaluation stack. The second string specifies a
+prefix for the argument names. Set \code{.bury} to \code{NULL} if you
+prefer to inline the function and its arguments in the call.}
+}
+\description{
+Normally, you invoke an R function by typing arguments manually. A
+powerful alternative is to call a function with a list of arguments
+assembled programmatically. This is the purpose of \code{invoke()}.
+}
+\details{
+Technically, \code{invoke()} is a version of \code{\link[base:do.call]{base::do.call()}}
+that creates cleaner call traces because it does not inline the
+function and the arguments in the call (see examples). To achieve
+this, \code{invoke()} creates a child environment of \code{.env} with \code{.fn}
+and all arguments bound to new symbols (see \code{\link[=env_bury]{env_bury()}}). It then
+uses the same strategy as \code{\link[=eval_bare]{eval_bare()}} to evaluate with minimal
+noise.
+}
+\examples{
+# invoke() has the same purpose as do.call():
+invoke(paste, letters)
+
+# But it creates much cleaner calls:
+invoke(call_inspect, mtcars)
+
+# and stacktraces:
+fn <- function(...) sys.calls()
+invoke(fn, list(mtcars))
+
+# Compare to do.call():
+do.call(call_inspect, mtcars)
+do.call(fn, list(mtcars))
+
+
+# Specify the function name either by supplying a string
+# identifying the function (it should be visible in .env):
+invoke("call_inspect", letters)
+
+# Or by changing the .bury argument, with which you can also change
+# the argument prefix:
+invoke(call_inspect, mtcars, .bury = c("inspect!", "col"))
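+
+# Named arguments can be supplied through .args or through the
+# dots:
+invoke(paste, list("a", "b"), sep = "-")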
+}
diff --git a/man/is_callable.Rd b/man/is_callable.Rd
new file mode 100644
index 0000000..81d8cfe
--- /dev/null
+++ b/man/is_callable.Rd
@@ -0,0 +1,43 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{is_callable}
+\alias{is_callable}
+\title{Is an object callable?}
+\usage{
+is_callable(x)
+}
+\arguments{
+\item{x}{An object to test.}
+}
+\description{
+A callable object is an object that can be set as the head of a
+\link[=lang_head]{call node}. This includes \link[=is_symbolic]{symbolic
+objects} that evaluate to a function, as well as literal
+functions.
+}
+\details{
+Note that strings may look like callable objects because
+expressions of the form \code{"list"()} are valid R code. However,
+that's only because the R parser transforms strings to symbols. It
+is not legal to manually set language heads to strings.
+}
+\examples{
+# Symbolic objects and functions are callable:
+is_callable(quote(foo))
+is_callable(base::identity)
+
+# mut_node_car() lets you modify calls without any checking:
+lang <- quote(foo(10))
+mut_node_car(lang, get_env())
+
+# Use is_callable() to check an input object is safe to put as CAR:
+obj <- base::identity
+
+if (is_callable(obj)) {
+  lang <- mut_node_car(lang, obj)
+} else {
+  abort("`obj` must be callable")
+}
+
+eval_bare(lang)
+}
diff --git a/man/is_condition.Rd b/man/is_condition.Rd
new file mode 100644
index 0000000..77e2ef1
--- /dev/null
+++ b/man/is_condition.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd.R
+\name{is_condition}
+\alias{is_condition}
+\title{Is object a condition?}
+\usage{
+is_condition(x)
+}
+\arguments{
+\item{x}{An object to test.}
+}
+\description{
+Is object a condition?
+}
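Since this entry has no examples, here is an illustrative sketch (assuming the rlang 0.1.2 API, where `is_condition()` tests inheritance from the `"condition"` class):

```r
library(rlang)

# Base condition objects inherit from "condition":
is_condition(simpleError("oops"))       # TRUE
is_condition(simpleWarning("careful"))  # TRUE

# Arbitrary objects do not:
is_condition("oops")                    # FALSE
is_condition(NULL)                      # FALSE
```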
diff --git a/man/is_copyable.Rd b/man/is_copyable.Rd
new file mode 100644
index 0000000..88a162c
--- /dev/null
+++ b/man/is_copyable.Rd
@@ -0,0 +1,34 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{is_copyable}
+\alias{is_copyable}
+\title{Is an object copyable?}
+\usage{
+is_copyable(x)
+}
+\arguments{
+\item{x}{An object to test.}
+}
+\description{
+When an object is modified, R generally copies it (sometimes
+lazily) to enforce \href{https://en.wikipedia.org/wiki/Value_semantics}{value semantics}.
+However, some internal types are uncopyable. If you try to copy
+them, either with \code{<-} or by argument passing, you actually create
+references to the original object rather than actual
+copies. Modifying these references can thus have far reaching side
+effects.
+}
+\examples{
+# Let's add attributes with structure() to uncopyable types. Since
+# they are not copied, the attributes are changed in place:
+env <- env()
+structure(env, foo = "bar")
+env
+
+# These objects, which can only be changed through side effects, are
+# not copyable:
+is_copyable(env)
+
+structure(base::list, foo = "bar")
+str(base::list)
+}
diff --git a/man/is_empty.Rd b/man/is_empty.Rd
new file mode 100644
index 0000000..aec7f8e
--- /dev/null
+++ b/man/is_empty.Rd
@@ -0,0 +1,19 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{is_empty}
+\alias{is_empty}
+\title{Is object an empty vector or NULL?}
+\usage{
+is_empty(x)
+}
+\arguments{
+\item{x}{object to test}
+}
+\description{
+Is object an empty vector or NULL?
+}
+\examples{
+is_empty(NULL)
+is_empty(list())
+is_empty(list(NULL))
+}
diff --git a/man/is_env.Rd b/man/is_env.Rd
new file mode 100644
index 0000000..5c778a5
--- /dev/null
+++ b/man/is_env.Rd
@@ -0,0 +1,18 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{is_env}
+\alias{is_env}
+\alias{is_bare_env}
+\title{Is object an environment?}
+\usage{
+is_env(x)
+
+is_bare_env(x)
+}
+\arguments{
+\item{x}{object to test}
+}
+\description{
+\code{is_bare_env()} tests whether \code{x} is an environment without an S3 or
+S4 class.
+}
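The entry above lacks an examples section; a minimal sketch (assuming the rlang 0.1.2 API):

```r
library(rlang)

e <- new.env()
is_env(e)              # TRUE

# An environment carrying an S3 class is still an environment,
# but no longer a *bare* environment:
classed <- structure(new.env(), class = "my_class")
is_env(classed)        # TRUE
is_bare_env(classed)   # FALSE
```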
diff --git a/man/is_expr.Rd b/man/is_expr.Rd
new file mode 100644
index 0000000..19b8e71
--- /dev/null
+++ b/man/is_expr.Rd
@@ -0,0 +1,115 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr.R
+\name{is_expr}
+\alias{is_expr}
+\alias{is_syntactic_literal}
+\alias{is_symbolic}
+\title{Is an object an expression?}
+\usage{
+is_expr(x)
+
+is_syntactic_literal(x)
+
+is_symbolic(x)
+}
+\arguments{
+\item{x}{An object to test.}
+}
+\description{
+\code{is_expr()} tests for expressions, the set of objects that can be
+obtained from parsing R code. An expression can be one of two
+things: either a symbolic object (for which \code{is_symbolic()} returns
+\code{TRUE}), or a syntactic literal (testable with
+\code{is_syntactic_literal()}). Technically, calls can contain any R
+object, not necessarily symbolic objects or syntactic
+literals. However, this only happens in artificial
+situations. Expressions as we define them only contain numbers,
+strings, \code{NULL}, symbols, and calls: this is the complete set of R
+objects that can be created when R parses source code (e.g. from
+using \code{\link[=parse_expr]{parse_expr()}}).
+
+Note that we are using the term expression in its colloquial sense
+and not to refer to \code{\link[=expression]{expression()}} vectors, a data type that wraps
+expressions in a vector and which isn't used much in modern R code.
+}
+\details{
+\code{is_symbolic()} returns \code{TRUE} for symbols and calls (objects with
+type \code{language}). Symbolic objects are replaced by their value
+during evaluation. Literals are the complement of symbolic
+objects. They are their own value and return themselves during
+evaluation.
+
+\code{is_syntactic_literal()} is a predicate that returns \code{TRUE} for the
+subset of literals that are created by R when parsing text (see
+\code{\link[=parse_expr]{parse_expr()}}): numbers, strings and \code{NULL}. Along with symbols,
+these literals are the terminating nodes in a parse tree.
+
+Note that in the most general sense, a literal is any R object that
+evaluates to itself and that can be evaluated in the empty
+environment. For instance, \code{quote(c(1, 2))} is not a literal; it is
+a call. However, the result of evaluating it in \code{\link[=base_env]{base_env()}} is a
+literal (in this case an atomic vector).
+
+Pairlists are also a kind of language object. However, since they
+are mostly an internal data structure, \code{is_expr()} returns \code{FALSE}
+for pairlists. You can use \code{is_pairlist()} to explicitly check for
+them. Pairlists are the data structure for function arguments. They
+usually do not arise from R code because subsetting a call is a
+type-preserving operation. However, you can obtain the pairlist of
+arguments by taking the CDR of the call object from C code. The
+rlang function \code{\link[=lang_tail]{lang_tail()}} will do it from R. Another way in
+which pairlists of arguments arise is by extracting the argument
+list of a closure with \code{\link[base:formals]{base::formals()}} or \code{\link[=fn_fmls]{fn_fmls()}}.
+}
+\examples{
+q1 <- quote(1)
+is_expr(q1)
+is_syntactic_literal(q1)
+
+q2 <- quote(x)
+is_expr(q2)
+is_symbol(q2)
+
+q3 <- quote(x + 1)
+is_expr(q3)
+is_lang(q3)
+
+
+# Atomic expressions are the terminating nodes of a call tree:
+# NULL or a scalar atomic vector:
+is_syntactic_literal("string")
+is_syntactic_literal(NULL)
+
+is_syntactic_literal(letters)
+is_syntactic_literal(quote(call()))
+
+# Parsable literals have the property of being self-quoting:
+identical("foo", quote("foo"))
+identical(1L, quote(1L))
+identical(NULL, quote(NULL))
+
+# Like any literals, they can be evaluated within the empty
+# environment:
+eval_bare(quote(1L), empty_env())
+
+# Whereas it would fail for symbolic expressions:
+# eval_bare(quote(c(1L, 2L)), empty_env())
+
+
+# Pairlists are also language objects representing argument lists.
+# You will usually encounter them with extracted formals:
+fmls <- formals(is_expr)
+typeof(fmls)
+
+# Since they are mostly an internal data structure, is_expr()
+# returns FALSE for pairlists, so you will have to check explicitly
+# for them:
+is_expr(fmls)
+is_pairlist(fmls)
+
+# Note that you can also extract call arguments as a pairlist:
+lang_tail(quote(fn(arg1, arg2 = "foo")))
+}
+\seealso{
+\code{\link[=is_lang]{is_lang()}} for a call predicate.
+}
diff --git a/man/is_formula.Rd b/man/is_formula.Rd
new file mode 100644
index 0000000..0e2705c
--- /dev/null
+++ b/man/is_formula.Rd
@@ -0,0 +1,68 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/formula.R
+\name{is_formula}
+\alias{is_formula}
+\alias{is_bare_formula}
+\alias{is_formulaish}
+\title{Is object a formula?}
+\usage{
+is_formula(x, scoped = NULL, lhs = NULL)
+
+is_bare_formula(x, scoped = NULL, lhs = NULL)
+
+is_formulaish(x, scoped = NULL, lhs = NULL)
+}
+\arguments{
+\item{x}{An object to test.}
+
+\item{scoped}{A boolean indicating whether the quosure or formula
+is scoped, that is, has a valid environment attribute. If \code{NULL},
+the scope is not inspected.}
+
+\item{lhs}{A boolean indicating whether the \link[=is_formula]{formula}
+or \link[=is_definition]{definition} has a left-hand side. If \code{NULL},
+the LHS is not inspected.}
+}
+\description{
+\code{is_formula()} tests if \code{x} is a call to \code{~}. \code{is_bare_formula()}
+tests in addition that \code{x} does not inherit from anything other than
+\code{"formula"}. \code{is_formulaish()} returns \code{TRUE} for both formulas and
+\link[=is_definition]{definitions} of the type \code{a := b}.
+}
+\details{
+The \code{scoped} argument pattern-matches on whether the scope bundled
+with the quosure is valid or not. Invalid scopes may happen in
+nested quotations like \code{~~expr}, where the outer quosure is validly
+scoped but not the inner one. This is because \code{~} saves the
+environment when it is evaluated, and quoted formulas are by
+definition not evaluated.
+}
+\examples{
+x <- disp ~ am
+is_formula(x)
+
+is_formula(~10)
+is_formula(10)
+
+is_formula(quo(foo))
+is_bare_formula(quo(foo))
+
+# Note that unevaluated formulas are treated as bare formulas even
+# though they don't inherit from "formula":
+f <- quote(~foo)
+is_bare_formula(f)
+
+# However you can specify `scoped` if you need the predicate to
+# return FALSE for these unevaluated formulas:
+is_bare_formula(f, scoped = TRUE)
+is_bare_formula(eval(f), scoped = TRUE)
+
+
+# There is also a variant that returns TRUE for definitions in
+# addition to formulas:
+is_formulaish(a ~ b)
+is_formulaish(a := b)
+}
+\seealso{
+\code{\link[=is_quosure]{is_quosure()}} and \code{\link[=is_quosureish]{is_quosureish()}}
+}
diff --git a/man/is_frame.Rd b/man/is_frame.Rd
new file mode 100644
index 0000000..72dcb5e
--- /dev/null
+++ b/man/is_frame.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{is_frame}
+\alias{is_frame}
+\title{Is object a frame?}
+\usage{
+is_frame(x)
+}
+\arguments{
+\item{x}{Object to test}
+}
+\description{
+Is object a frame?
+}
diff --git a/man/is_function.Rd b/man/is_function.Rd
new file mode 100644
index 0000000..3978492
--- /dev/null
+++ b/man/is_function.Rd
@@ -0,0 +1,109 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/fn.R
+\name{is_function}
+\alias{is_function}
+\alias{is_closure}
+\alias{is_primitive}
+\alias{is_primitive_eager}
+\alias{is_primitive_lazy}
+\title{Is object a function?}
+\usage{
+is_function(x)
+
+is_closure(x)
+
+is_primitive(x)
+
+is_primitive_eager(x)
+
+is_primitive_lazy(x)
+}
+\arguments{
+\item{x}{Object to be tested.}
+}
+\description{
+The R language defines two different types of functions: primitive
+functions, which are low-level, and closures, which are the regular
+kind of functions.
+}
+\details{
+Closures are functions written in R, named after the way their
+arguments are scoped within nested environments (see
+https://en.wikipedia.org/wiki/Closure_(computer_programming)).  The
+root environment of the closure is called the closure
+environment. When closures are evaluated, a new environment called
+the evaluation frame is created with the closure environment as
+parent. This is where the body of the closure is evaluated. These
+closure frames appear on the evaluation stack (see \code{\link[=ctxt_stack]{ctxt_stack()}}),
+as opposed to primitive functions which do not necessarily have
+their own evaluation frame and never appear on the stack.
+
+Primitive functions are more efficient than closures for two
+reasons. First, they are written entirely in fast low-level
+code. Second, the mechanism by which they are passed arguments is
+more efficient because they often do not need the full procedure of
+argument matching (dealing with positional versus named arguments,
+partial matching, etc.). One practical consequence of the special
+way in which primitives are passed arguments is that they
+technically do not have formal arguments, and \code{\link[=formals]{formals()}} will
+return \code{NULL} if called on a primitive function. See \code{\link[=fn_fmls]{fn_fmls()}}
+for a function that returns a representation of formal arguments
+for primitive functions. Finally, primitive functions can either
+take arguments lazily, like R closures do, or evaluate them eagerly
+before being passed on to the C code. The former kind of primitives
+are called "special" in R terminology, while the latter is referred
+to as "builtin". \code{is_primitive_eager()} and \code{is_primitive_lazy()}
+allow you to check whether a primitive function evaluates arguments
+eagerly or lazily.
+
+You will also encounter the distinction between primitive and
+internal functions in technical documentation. Like primitive
+functions, internal functions are defined at a low level and
+written in C. However, internal functions have no representation in
+the R language. Instead, they are called via a call to
+\code{\link[base:.Internal]{base::.Internal()}} within a regular closure. This ensures that
+they appear as normal R function objects: they obey all the usual
+rules of argument passing, and they appear on the evaluation stack
+like any other closure. As a result, \code{\link[=fn_fmls]{fn_fmls()}} does not need to
+look in the \code{.ArgsEnv} environment to obtain a representation of
+their arguments, and there is no way of querying from R whether
+they are lazy ('special' in R terminology) or eager ('builtin').
+
+You can call primitive functions with \code{\link[=.Primitive]{.Primitive()}} and internal
+functions with \code{\link[=.Internal]{.Internal()}}. However, calling internal functions
+in a package is forbidden by CRAN's policy because they are
+considered part of the private API. They often assume that they
+have been called with correctly formed arguments, and may cause R
+to crash if you call them with unexpected objects.
+}
+\examples{
+# Primitive functions are not closures:
+is_closure(base::c)
+is_primitive(base::c)
+
+# On the other hand, internal functions are wrapped in a closure
+# and appear as such from the R side:
+is_closure(base::eval)
+
+# Both closures and primitives are functions:
+is_function(base::c)
+is_function(base::eval)
+
+# Primitive functions never appear in evaluation stacks:
+is_primitive(base::`[[`)
+is_primitive(base::list)
+list(ctxt_stack())[[1]]
+
+# While closures do:
+identity(identity(ctxt_stack()))
+
+# Many primitive functions evaluate arguments eagerly:
+is_primitive_eager(base::c)
+is_primitive_eager(base::list)
+is_primitive_eager(base::`+`)
+
+# However, primitives that operate on expressions, like quote() or
+# substitute(), are lazy:
+is_primitive_lazy(base::quote)
+is_primitive_lazy(base::substitute)
+}
diff --git a/man/is_installed.Rd b/man/is_installed.Rd
new file mode 100644
index 0000000..b9fb562
--- /dev/null
+++ b/man/is_installed.Rd
@@ -0,0 +1,22 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{is_installed}
+\alias{is_installed}
+\title{Is a package installed in the library?}
+\usage{
+is_installed(pkg)
+}
+\arguments{
+\item{pkg}{The name of a package.}
+}
+\value{
+\code{TRUE} if the package is installed, \code{FALSE} otherwise.
+}
+\description{
+This checks that a package is installed with minimal side effects.
+If installed, the package will be loaded but not attached.
+}
+\examples{
+is_installed("utils")
+is_installed("ggplot5")
+}
diff --git a/man/is_integerish.Rd b/man/is_integerish.Rd
new file mode 100644
index 0000000..88358ac
--- /dev/null
+++ b/man/is_integerish.Rd
@@ -0,0 +1,36 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{is_integerish}
+\alias{is_integerish}
+\alias{is_bare_integerish}
+\alias{is_scalar_integerish}
+\title{Is a vector integer-like?}
+\usage{
+is_integerish(x, n = NULL)
+
+is_bare_integerish(x, n = NULL)
+
+is_scalar_integerish(x)
+}
+\arguments{
+\item{x}{Object to be tested.}
+
+\item{n}{Expected length of a vector.}
+}
+\description{
+These predicates check whether R considers a numeric vector to be
+integer-like, according to its own tolerance check (which is in
+fact delegated to the C library). This function is not adapted to
+data analysis; see the help for \code{\link[base:is.integer]{base::is.integer()}} for examples
+of how to check for whole numbers.
+}
+\examples{
+is_integerish(10L)
+is_integerish(10.0)
+is_integerish(10.000001)
+is_integerish(TRUE)
+}
+\seealso{
+\code{\link[=is_bare_numeric]{is_bare_numeric()}} for testing whether an object is a
+base numeric type (a bare double or integer vector).
+}
diff --git a/man/is_lang.Rd b/man/is_lang.Rd
new file mode 100644
index 0000000..16e125a
--- /dev/null
+++ b/man/is_lang.Rd
@@ -0,0 +1,80 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{is_lang}
+\alias{is_lang}
+\alias{is_unary_lang}
+\alias{is_binary_lang}
+\title{Is object a call?}
+\usage{
+is_lang(x, name = NULL, n = NULL, ns = NULL)
+
+is_unary_lang(x, name = NULL, ns = NULL)
+
+is_binary_lang(x, name = NULL, ns = NULL)
+}
+\arguments{
+\item{x}{An object to test. If a formula, the right-hand side is
+extracted.}
+
+\item{name}{An optional name that the call should match. It is
+passed to \code{\link[=sym]{sym()}} before matching. This argument is vectorised
+and you can supply a vector of names to match. In this case,
+\code{is_lang()} returns \code{TRUE} if at least one name matches.}
+
+\item{n}{An optional number of arguments that the call should
+match.}
+
+\item{ns}{The namespace of the call. If \code{NULL}, the namespace
+doesn't participate in the pattern-matching. If an empty string
+\code{""} and \code{x} is a namespaced call, \code{is_lang()} returns
+\code{FALSE}. If any other string, \code{is_lang()} checks that \code{x} is
+namespaced within \code{ns}.}
+}
+\description{
+This function tests if \code{x} is a call (or \link[=lang]{language
+object}). This is a pattern-matching predicate that will
+return \code{FALSE} if \code{name} and \code{n} are supplied and the call does not
+match these properties. \code{is_unary_lang()} and \code{is_binary_lang()}
+hardcode \code{n} to 1 and 2.
+}
+\examples{
+is_lang(quote(foo(bar)))
+
+# You can pattern-match the call with additional arguments:
+is_lang(quote(foo(bar)), "foo")
+is_lang(quote(foo(bar)), "bar")
+is_lang(quote(foo(bar)), quote(foo))
+
+# Match the number of arguments with is_lang():
+is_lang(quote(foo(bar)), "foo", 1)
+is_lang(quote(foo(bar)), "foo", 2)
+
+# Or more specifically:
+is_unary_lang(quote(foo(bar)))
+is_unary_lang(quote(+3))
+is_unary_lang(quote(1 + 3))
+is_binary_lang(quote(1 + 3))
+
+
+# By default, namespaced calls are tested unqualified:
+ns_expr <- quote(base::list())
+is_lang(ns_expr, "list")
+
+# You can also specify whether the call shouldn't be namespaced by
+# supplying an empty string:
+is_lang(ns_expr, "list", ns = "")
+
+# Or if it should have a namespace:
+is_lang(ns_expr, "list", ns = "utils")
+is_lang(ns_expr, "list", ns = "base")
+
+
+# The name argument is vectorised so you can supply a list of names
+# to match with:
+is_lang(quote(foo(bar)), c("bar", "baz"))
+is_lang(quote(foo(bar)), c("bar", "foo"))
+is_lang(quote(base::list), c("::", ":::", "$", "@"))
+}
+\seealso{
+\code{\link[=is_expr]{is_expr()}}
+}
diff --git a/man/is_named.Rd b/man/is_named.Rd
new file mode 100644
index 0000000..8869d12
--- /dev/null
+++ b/man/is_named.Rd
@@ -0,0 +1,68 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/attr.R
+\name{is_named}
+\alias{is_named}
+\alias{is_dictionaryish}
+\alias{have_name}
+\title{Is object named?}
+\usage{
+is_named(x)
+
+is_dictionaryish(x)
+
+have_name(x)
+}
+\arguments{
+\item{x}{An object to test.}
+}
+\value{
+\code{is_named()} and \code{is_dictionaryish()} are scalar predicates
+and return \code{TRUE} or \code{FALSE}. \code{have_name()} is vectorised and
+returns a logical vector as long as the input.
+}
+\description{
+\code{is_named()} checks that \code{x} has a names attribute, and that none of
+the names are missing or empty (\code{NA} or \code{""}). \code{is_dictionaryish()}
+checks that an object is a dictionary: that it has actual names and,
+in addition, no duplicated names. \code{have_name()}
+is a vectorised version of \code{is_named()}.
+}
+\examples{
+# A data frame usually has valid, unique names
+is_named(mtcars)
+have_name(mtcars)
+is_dictionaryish(mtcars)
+
+# But data frames can also have duplicated columns:
+dups <- cbind(mtcars, cyl = seq_len(nrow(mtcars)))
+is_dictionaryish(dups)
+
+# The names are still valid:
+is_named(dups)
+have_name(dups)
+
+
+# For empty objects the semantics are slightly different.
+# is_dictionaryish() returns TRUE for empty objects:
+is_dictionaryish(list())
+
+# But is_named() will only return TRUE if there is a names
+# attribute (a zero-length character vector in this case):
+x <- set_names(list(), character(0))
+is_named(x)
+
+
+# Empty and missing names are invalid:
+invalid <- dups
+names(invalid)[2] <- ""
+names(invalid)[5] <- NA
+
+# is_named() performs a global check while have_name() can show you
+# where the problem is:
+is_named(invalid)
+have_name(invalid)
+
+# have_name() will work even with vectors that don't have a names
+# attribute:
+have_name(letters)
+}
diff --git a/man/is_pairlist.Rd b/man/is_pairlist.Rd
new file mode 100644
index 0000000..7899970
--- /dev/null
+++ b/man/is_pairlist.Rd
@@ -0,0 +1,29 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-node.R
+\name{is_pairlist}
+\alias{is_pairlist}
+\alias{is_node}
+\title{Is object a node or pairlist?}
+\usage{
+is_pairlist(x)
+
+is_node(x)
+}
+\arguments{
+\item{x}{Object to test.}
+}
+\description{
+\itemize{
+\item \code{is_pairlist()} checks that \code{x} has type \code{pairlist} or \code{NULL}.
+\code{NULL} is treated as a pairlist because it is the terminating
+node of pairlists and an empty pairlist is thus the \code{NULL}
+object itself.
+\item \code{is_node()} checks that \code{x} has type \code{pairlist}.
+}
+
+In other words, \code{is_pairlist()} tests for the data structure while
+\code{is_node()} tests for the internal type.
+}
+\seealso{
+\code{\link[=is_lang]{is_lang()}} tests for language nodes.
+}
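Since this entry has no examples section, a short sketch may help (assuming the rlang 0.1.2 API; `formals()` on a closure returns a pairlist):

```r
library(rlang)

fmls <- formals(lm)   # formal arguments are stored as a pairlist
is_pairlist(fmls)     # TRUE: pairlist data structure
is_node(fmls)         # TRUE: internal pairlist type

# NULL terminates pairlists, so it passes the data-structure test
# but not the type test:
is_pairlist(NULL)     # TRUE
is_node(NULL)         # FALSE
```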
diff --git a/man/is_quosure.Rd b/man/is_quosure.Rd
new file mode 100644
index 0000000..ca02caf
--- /dev/null
+++ b/man/is_quosure.Rd
@@ -0,0 +1,44 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quo.R
+\name{is_quosure}
+\alias{is_quosure}
+\alias{is_quosureish}
+\title{Is an object a quosure or quosure-like?}
+\usage{
+is_quosure(x)
+
+is_quosureish(x, scoped = NULL)
+}
+\arguments{
+\item{x}{An object to test.}
+
+\item{scoped}{A boolean indicating whether the quosure or formula
+is scoped, that is, has a valid environment attribute. If \code{NULL},
+the scope is not inspected.}
+}
+\description{
+These predicates test for \link{quosure} objects.
+\itemize{
+\item \code{is_quosure()} tests for a tidyeval quosure. These are one-sided
+formulas with a \code{quosure} class.
+\item \code{is_quosureish()} tests for general R quosure objects: quosures,
+or one-sided formulas.
+}
+}
+\examples{
+# Quosures are created with quo():
+quo(foo)
+is_quosure(quo(foo))
+
+# Formulas look similar to quosures but are not quosures:
+is_quosure(~foo)
+
+# But they are quosureish:
+is_quosureish(~foo)
+
+# Note that two-sided formulas are never quosureish:
+is_quosureish(a ~ b)
+}
+\seealso{
+\code{\link[=is_formula]{is_formula()}} and \code{\link[=is_formulaish]{is_formulaish()}}
+}
diff --git a/man/is_stack.Rd b/man/is_stack.Rd
new file mode 100644
index 0000000..7db0516
--- /dev/null
+++ b/man/is_stack.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{is_stack}
+\alias{is_stack}
+\alias{is_eval_stack}
+\alias{is_call_stack}
+\title{Is object a stack?}
+\usage{
+is_stack(x)
+
+is_eval_stack(x)
+
+is_call_stack(x)
+}
+\arguments{
+\item{x}{An object to test}
+}
+\description{
+Is object a stack?
+}
diff --git a/man/is_symbol.Rd b/man/is_symbol.Rd
new file mode 100644
index 0000000..a315bb3
--- /dev/null
+++ b/man/is_symbol.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-sym.R
+\name{is_symbol}
+\alias{is_symbol}
+\title{Is object a symbol?}
+\usage{
+is_symbol(x)
+}
+\arguments{
+\item{x}{An object to test.}
+}
+\description{
+Is object a symbol?
+}
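A brief sketch for this predicate (assuming the rlang 0.1.2 API, where `sym()` builds a symbol from a string):

```r
library(rlang)

is_symbol(quote(x))      # TRUE: a bare name
is_symbol(quote(x + y))  # FALSE: a call, not a symbol
is_symbol("x")           # FALSE: strings are not symbols
is_symbol(sym("x"))      # TRUE: sym() converts a string to a symbol
```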
diff --git a/man/is_true.Rd b/man/is_true.Rd
new file mode 100644
index 0000000..1460bd4
--- /dev/null
+++ b/man/is_true.Rd
@@ -0,0 +1,25 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{is_true}
+\alias{is_true}
+\alias{is_false}
+\title{Is object identical to TRUE or FALSE?}
+\usage{
+is_true(x)
+
+is_false(x)
+}
+\arguments{
+\item{x}{object to test}
+}
+\description{
+These functions bypass R's automatic conversion rules and check
+that \code{x} is literally \code{TRUE} or \code{FALSE}.
+}
+\examples{
+is_true(TRUE)
+is_true(1)
+
+is_false(FALSE)
+is_false(0)
+}
diff --git a/man/lang.Rd b/man/lang.Rd
new file mode 100644
index 0000000..3fbb4b3
--- /dev/null
+++ b/man/lang.Rd
@@ -0,0 +1,97 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{lang}
+\alias{lang}
+\alias{new_language}
+\title{Create a call}
+\usage{
+lang(.fn, ..., .ns = NULL)
+
+new_language(head, tail = NULL)
+}
+\arguments{
+\item{.fn}{Function to call. Must be a callable object: a string,
+symbol, call, or a function.}
+
+\item{...}{Arguments to the call either in or out of a list. Dots
+are evaluated with \link[=dots_list]{explicit splicing}.}
+
+\item{.ns}{Namespace with which to prefix \code{.fn}. Must be a string
+or symbol.}
+
+\item{head}{A \link[=is_callable]{callable} object: a symbol, call, or
+literal function.}
+
+\item{tail}{A \link{pairlist} of arguments.}
+}
+\description{
+Language objects are (with symbols) one of the two types of
+\link[=is_symbolic]{symbolic} objects in R. These symbolic objects form
+the backbone of \link[=is_expr]{expressions}. They represent a value,
+unlike literal objects which are their own values. While symbols
+are directly \link[=env_bind]{bound} to a value, language objects
+represent \emph{function calls}, which is why they are commonly referred
+to as calls.
+\itemize{
+\item \code{lang()} creates a call from a function name (or a literal
+function to inline in the call) and a list of arguments.
+\item \code{new_language()} is bare-bones and takes a head and a tail. The
+head must be \link[=is_callable]{callable} and the tail must be a
+\link{pairlist}. See section on calls as parse trees below. This
+constructor is useful to avoid costly coercions between lists and
+pairlists of arguments.
+}
+}
+\section{Calls as parse trees}{
+
+
+Language objects are structurally identical to
+\link[=pairlist]{pairlists}. They are containers of two objects, the head
+and the tail (also called the CAR and the CDR).
+\itemize{
+\item The head contains the function to call, either literally or
+symbolically. If a literal function, the call is said to be
+inlined. If a symbol, the call is named. If another call, it is
+recursive. \code{foo()()} would be an example of a recursive call
+whose head contains another call. See \code{\link[=lang_type_of]{lang_type_of()}} and
+\code{\link[=is_callable]{is_callable()}}.
+\item The tail contains the arguments and must be a \link{pairlist}.
+}
+
+You can retrieve those components with \code{\link[=lang_head]{lang_head()}} and
+\code{\link[=lang_tail]{lang_tail()}}. Since language nodes can contain other nodes (either
+calls or pairlists), they are capable of forming a tree. When R
+\link[=parse_expr]{parses} an expression, it saves the parse tree in a
+data structure composed of language and pairlist nodes. It is
+precisely because the parse tree is saved in first-class R objects
+that it is possible for functions to \link[=expr]{capture} their
+arguments unevaluated.
+}
+
+\section{Call versus language}{
+
+
+\code{call} is the old S \link[base:mode]{mode} of these objects while
+\code{language} is the R \link[base:typeof]{type}. While it is usually
+better to avoid using S terminology, it would probably be even more
+confusing to systematically refer to "calls" as "language". rlang
+still uses \code{lang} as a particle for functions dealing with calls,
+for consistency.
+}
+
+\examples{
+# fn can either be a string, a symbol or a call
+lang("f", a = 1)
+lang(quote(f), a = 1)
+lang(quote(f()), a = 1)
+
+# Arguments can be supplied individually or in a list:
+lang(quote(f), a = 1, b = 2)
+lang(quote(f), splice(list(a = 1, b = 2)))
+
+# Creating namespaced calls:
+lang("fun", arg = quote(baz), .ns = "mypkg")
+}
+\seealso{
+\code{\link[=lang_modify]{lang_modify()}}
+}
diff --git a/man/lang_args.Rd b/man/lang_args.Rd
new file mode 100644
index 0000000..39bce0c
--- /dev/null
+++ b/man/lang_args.Rd
@@ -0,0 +1,44 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{lang_args}
+\alias{lang_args}
+\alias{lang_args_names}
+\title{Extract arguments from a call}
+\usage{
+lang_args(lang)
+
+lang_args_names(lang)
+}
+\arguments{
+\item{lang}{Can be a call (language object), a formula quoting a
+call in the right-hand side, or a frame object from which to
+extract the call expression.}
+}
+\value{
+A named list of arguments.
+}
+\description{
+Extract arguments from a call
+}
+\examples{
+call <- quote(f(a, b))
+
+# Subsetting a call returns the arguments converted to a language
+# object:
+call[-1]
+
+# See also lang_tail() which returns the arguments without
+# conversion as the original pairlist:
+str(lang_tail(call))
+
+# On the other hand, lang_args() returns a regular list that is
+# often easier to work with:
+str(lang_args(call))
+
+# When the arguments are unnamed, a vector of empty strings is
+# supplied (rather than NULL):
+lang_args_names(call)
+}
+\seealso{
+\code{\link[=lang_tail]{lang_tail()}}, \code{\link[=fn_fmls]{fn_fmls()}} and \code{\link[=fn_fmls_names]{fn_fmls_names()}}
+}
diff --git a/man/lang_fn.Rd b/man/lang_fn.Rd
new file mode 100644
index 0000000..d7ade18
--- /dev/null
+++ b/man/lang_fn.Rd
@@ -0,0 +1,30 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{lang_fn}
+\alias{lang_fn}
+\title{Extract function from a call}
+\usage{
+lang_fn(lang)
+}
+\arguments{
+\item{lang}{Can be a call (language object), a formula quoting a
+call in the right-hand side, or a frame object from which to
+extract the call expression.}
+}
+\description{
+If a frame or formula, the function will be retrieved from the
+associated environment. Otherwise, it is looked up in the calling
+frame.
+}
+\examples{
+# Extract from a quoted call:
+lang_fn(~matrix())
+lang_fn(quote(matrix()))
+
+# Extract the calling function
+test <- function() lang_fn(call_frame())
+test()
+}
+\seealso{
+\code{\link[=lang_name]{lang_name()}}
+}
diff --git a/man/lang_head.Rd b/man/lang_head.Rd
new file mode 100644
index 0000000..3deb64f
--- /dev/null
+++ b/man/lang_head.Rd
@@ -0,0 +1,37 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{lang_head}
+\alias{lang_head}
+\alias{lang_tail}
+\title{Return the head or tail of a call object}
+\usage{
+lang_head(lang)
+
+lang_tail(lang)
+}
+\arguments{
+\item{lang}{Can be a call (language object), a formula quoting a
+call in the right-hand side, or a frame object from which to
+extract the call expression.}
+}
+\description{
+These functions return the head or the tail of a call. See section
+on calls as parse trees in \code{\link[=lang]{lang()}}. They are equivalent to
+\code{\link[=node_car]{node_car()}} and \code{\link[=node_cdr]{node_cdr()}} but support quosures and check that
+the input is indeed a call before retrieving the head or tail (it
+is unsafe to do this without type checking).
+
+\code{lang_head()} returns the head of the call without any conversion,
+unlike \code{\link[=lang_name]{lang_name()}} which checks that the head is a symbol and
+converts it to a string. \code{lang_tail()} returns the pairlist of
+arguments (while \code{\link[=lang_args]{lang_args()}} returns the same object converted to
+a regular list).
+}
+\examples{
+lang <- quote(foo(bar, baz))
+lang_head(lang)
+lang_tail(lang)
+}
+\seealso{
+\link{pairlist}, \code{\link[=lang_args]{lang_args()}}, \code{\link[=lang]{lang()}}
+}
diff --git a/man/lang_modify.Rd b/man/lang_modify.Rd
new file mode 100644
index 0000000..35980d2
--- /dev/null
+++ b/man/lang_modify.Rd
@@ -0,0 +1,62 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{lang_modify}
+\alias{lang_modify}
+\title{Modify the arguments of a call}
+\usage{
+lang_modify(.lang, ..., .standardise = FALSE)
+}
+\arguments{
+\item{.lang}{Can be a call (language object), a formula quoting a
+call in the right-hand side, or a frame object from which to
+extract the call expression.}
+
+\item{...}{Named or unnamed expressions (constants, names or calls)
+used to modify the call. Use \code{NULL} to remove arguments. Dots are
+evaluated with \link[=dots_list]{explicit splicing}.}
+
+\item{.standardise}{If \code{TRUE}, the call is standardised beforehand
+to match existing unnamed arguments to their argument names. This
+prevents new named arguments from accidentally replacing original
+unnamed arguments.}
+}
+\value{
+A quosure if \code{.lang} is a quosure, a call otherwise.
+}
+\description{
+Modify the arguments of a call
+}
+\examples{
+call <- quote(mean(x, na.rm = TRUE))
+
+# Modify an existing argument
+lang_modify(call, na.rm = FALSE)
+lang_modify(call, x = quote(y))
+
+# Remove an argument
+lang_modify(call, na.rm = NULL)
+
+# Add a new argument
+lang_modify(call, trim = 0.1)
+
+# Add an explicit missing argument
+lang_modify(call, na.rm = quote(expr = ))
+
+# Supply a list of new arguments with splice()
+newargs <- list(na.rm = NULL, trim = 0.1)
+lang_modify(call, splice(newargs))
+
+# Supply a call frame to extract the frame expression:
+f <- function(bool = TRUE) {
+  lang_modify(call_frame(), splice(list(bool = FALSE)))
+}
+f()
+
+
+# You can also modify quosures in place:
+f <- ~matrix(bar)
+lang_modify(f, quote(foo))
+}
+\seealso{
+\code{\link[=lang]{lang()}}
+}
diff --git a/man/lang_name.Rd b/man/lang_name.Rd
new file mode 100644
index 0000000..0e588b7
--- /dev/null
+++ b/man/lang_name.Rd
@@ -0,0 +1,40 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{lang_name}
+\alias{lang_name}
+\title{Extract function name of a call}
+\usage{
+lang_name(lang)
+}
+\arguments{
+\item{lang}{Can be a call (language object), a formula quoting a
+call in the right-hand side, or a frame object from which to
+extract the call expression.}
+}
+\value{
+A string with the function name, or \code{NULL} if the function
+is anonymous.
+}
+\description{
+Extract function name of a call
+}
+\examples{
+# Extract the function name from quoted calls:
+lang_name(~foo(bar))
+lang_name(quote(foo(bar)))
+
+# Or from a frame:
+foo <- function(bar) lang_name(call_frame())
+foo(bar)
+
+# Namespaced calls are correctly handled:
+lang_name(~base::matrix(baz))
+
+# Anonymous and subsetted functions return NULL:
+lang_name(~foo$bar())
+lang_name(~foo[[bar]]())
+lang_name(~foo()())
+}
+\seealso{
+\code{\link[=lang_fn]{lang_fn()}}
+}
diff --git a/man/lang_standardise.Rd b/man/lang_standardise.Rd
new file mode 100644
index 0000000..9cf1247
--- /dev/null
+++ b/man/lang_standardise.Rd
@@ -0,0 +1,20 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-lang.R
+\name{lang_standardise}
+\alias{lang_standardise}
+\title{Standardise a call}
+\usage{
+lang_standardise(lang)
+}
+\arguments{
+\item{lang}{Can be a call (language object), a formula quoting a
+call in the right-hand side, or a frame object from which to
+extract the call expression.}
+}
+\value{
+A quosure if \code{lang} is a quosure, a raw call otherwise.
+}
+\description{
+This is essentially equivalent to \code{\link[base:match.call]{base::match.call()}}, but with
+better handling of primitive functions.
+}
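The `lang_standardise()` page ships no examples section. The sketch below is an illustration only, assuming the rlang 0.1.2 API documented above and the stated equivalence with `base::match.call()`; only the `match.call()` line is base R behaviour that can be verified directly.

```r
# Sketch only -- assumes lang_standardise() behaves like match.call(),
# as the page above states.
call <- quote(matrix(1:4, 2, 2))
lang_standardise(call)

# Comparable base R result, matching positional arguments to names:
match.call(matrix, call)   # matrix(data = 1:4, nrow = 2, ncol = 2)
```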
diff --git a/man/missing.Rd b/man/missing.Rd
new file mode 100644
index 0000000..d4475dd
--- /dev/null
+++ b/man/missing.Rd
@@ -0,0 +1,62 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-missing.R
+\docType{data}
+\name{missing}
+\alias{missing}
+\alias{na_lgl}
+\alias{na_int}
+\alias{na_dbl}
+\alias{na_chr}
+\alias{na_cpl}
+\title{Missing values}
+\format{An object of class \code{logical} of length 1.}
+\usage{
+na_lgl
+
+na_int
+
+na_dbl
+
+na_chr
+
+na_cpl
+}
+\description{
+Missing values are represented in R with the general symbol
+\code{NA}. They can be inserted in almost all data containers: all
+atomic vectors except raw vectors can contain missing values. To
+achieve this, R automatically converts the general \code{NA} symbol to a
+typed missing value appropriate for the target vector. The objects
+provided here are aliases for those typed \code{NA} objects.
+}
+\details{
+Typed missing values are necessary because R needs sentinel values
+of the same type (i.e. the same machine representation of the data)
+as the containers into which they are inserted. The official typed
+missing values are \code{NA_integer_}, \code{NA_real_}, \code{NA_character_} and
+\code{NA_complex_}. The missing value for logical vectors is simply the
+default \code{NA}. The aliases provided in rlang are consistently named
+and thus simpler to remember. Also, \code{na_lgl} is provided as an
+alias to \code{NA} that makes intent clearer.
+
+Since \code{na_lgl} is the default \code{NA}, expressions such as \code{c(NA, NA)}
+yield logical vectors, as no data is available to hint at the
+target type. In the same way, since lists and environments can
+contain any types, expressions like \code{list(NA)} store a logical
+\code{NA}.
+}
+\examples{
+typeof(NA)
+typeof(na_lgl)
+typeof(na_int)
+
+# Note that while the base R missing symbols cannot be overwritten,
+# that's not the case for rlang's aliases:
+na_dbl <- NA
+typeof(na_dbl)
+}
+\seealso{
+The \link{vector-along} family to create typed vectors filled
+with missing values.
+}
+\keyword{datasets}
diff --git a/man/missing_arg.Rd b/man/missing_arg.Rd
new file mode 100644
index 0000000..5ada46f
--- /dev/null
+++ b/man/missing_arg.Rd
@@ -0,0 +1,68 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/arg.R
+\name{missing_arg}
+\alias{missing_arg}
+\alias{is_missing}
+\alias{maybe_missing}
+\title{Generate or handle a missing argument}
+\usage{
+missing_arg()
+
+is_missing(x)
+
+maybe_missing(x)
+}
+\arguments{
+\item{x}{An object that might be the missing argument.}
+}
+\description{
+These functions help you use the missing argument as a regular R
+object. It is valid to generate a missing argument and assign it in
+the current environment or in a list. However, once assigned in the
+environment, the missing argument normally cannot be touched.
+\code{maybe_missing()} checks whether the object is the missing
+argument, and regenerates it if needed to prevent R from throwing a
+missing error. In addition, \code{is_missing()} lets you check for a
+missing argument in a larger range of situations than
+\code{\link[base:missing]{base::missing()}} (see examples).
+}
+\examples{
+# The missing argument can be useful to generate calls
+quo(f(x = !! missing_arg()))
+quo(f(x = !! NULL))
+
+
+# It is perfectly valid to generate and assign the missing
+# argument.
+x <- missing_arg()
+l <- list(missing_arg())
+
+# Note that accessing a missing argument contained in a list does
+# not trigger an error:
+l[[1]]
+is.null(l[[1]])
+
+# But if the missing argument is assigned in the current
+# environment, it is no longer possible to touch it. The following
+# lines would all return errors:
+#> x
+#> is.null(x)
+
+# In these cases, you can use maybe_missing() to manipulate an
+# object that might be the missing argument without triggering a
+# missing error:
+maybe_missing(x)
+is.null(maybe_missing(x))
+is_missing(maybe_missing(x))
+
+
+# base::missing() does not work well if you supply an
+# expression. The following lines would throw an error:
+
+#> missing(missing_arg())
+#> missing(l[[1]])
+
+# while is_missing() will work as expected:
+is_missing(missing_arg())
+is_missing(l[[1]])
+}
diff --git a/man/modify.Rd b/man/modify.Rd
new file mode 100644
index 0000000..c92026e
--- /dev/null
+++ b/man/modify.Rd
@@ -0,0 +1,30 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-utils.R
+\name{modify}
+\alias{modify}
+\title{Modify a vector}
+\usage{
+modify(.x, ...)
+}
+\arguments{
+\item{.x}{A vector to modify.}
+
+\item{...}{List of elements to merge into \code{.x}. Named elements
+already existing in \code{.x} are used as replacements. Elements that
+have new or no names are inserted at the end. These dots are
+evaluated with \link[=dots_list]{explicit splicing}.}
+}
+\value{
+A modified vector upcast to a list.
+}
+\description{
+This function merges a list of arguments into a vector. It always
+returns a list.
+}
+\examples{
+modify(c(1, b = 2, 3), 4, b = "foo")
+
+x <- list(a = 1, b = 2)
+y <- list(b = 3, c = 4)
+modify(x, splice(y))
+}
diff --git a/man/mut_utf8_locale.Rd b/man/mut_utf8_locale.Rd
new file mode 100644
index 0000000..b1e2cee
--- /dev/null
+++ b/man/mut_utf8_locale.Rd
@@ -0,0 +1,46 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-chr.R
+\name{mut_utf8_locale}
+\alias{mut_utf8_locale}
+\alias{mut_latin1_locale}
+\alias{mut_mbcs_locale}
+\title{Set the locale's codeset for testing}
+\usage{
+mut_utf8_locale()
+
+mut_latin1_locale()
+
+mut_mbcs_locale()
+}
+\value{
+The previous locale (invisibly).
+}
+\description{
+Setting a locale's codeset (specifically, the \code{LC_CTYPE} category)
+produces side effects in R's handling of strings. The most
+important of these affects how the R parser marks strings. R has
+specific internal support for latin1 (single-byte encoding) and
+UTF-8 (multi-byte variable-width encoding) strings. If the locale
+codeset is latin1 or UTF-8, the parser will mark all strings with
+the corresponding encoding. It is important for strings to have
+consistent encoding markers, as they determine a number of internal
+encoding conversions when R or packages handle strings (see
+\code{\link[=set_str_encoding]{set_str_encoding()}} for some examples).
+}
+\details{
+If you are changing the locale encoding for testing purposes, you
+need to be aware that R caches strings and symbols to save
+memory. If you change the locale during an R session, it can lead
+to surprising and difficult-to-reproduce results. When in doubt, restart
+your R session.
+
+Note that these helpers are only provided for interactively testing
+the effects of changing the locale codeset. They let you quickly change
+the default text encoding to latin1, UTF-8, or non-UTF-8 MBCS. They
+are not widely tested and do not provide a way of setting the
+language and region of the locale. They have permanent side effects
+and should probably not be used in package examples, unit tests, or
+in the course of a data analysis. Note finally that
+\code{mut_utf8_locale()} will not work on Windows as only latin1 and
+MBCS locales are supported on this OS.
+}
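This page has no examples section, likely because the helpers have permanent side effects. Below is a hedged interactive sketch using the names from the usage block above; the assumption that the invisibly returned locale string can be passed back to `Sys.setlocale()` is mine, not stated by the page.

```r
# Interactive-only sketch; changing LC_CTYPE has session-wide effects,
# so run this in a throwaway session.
old <- mut_latin1_locale()      # returns the previous locale, invisibly

Sys.getlocale("LC_CTYPE")       # inspect the new codeset

# Assumption: the returned value restores the original locale.
Sys.setlocale("LC_CTYPE", old)
```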
diff --git a/man/names2.Rd b/man/names2.Rd
new file mode 100644
index 0000000..d4dea1b
--- /dev/null
+++ b/man/names2.Rd
@@ -0,0 +1,24 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/attr.R
+\name{names2}
+\alias{names2}
+\title{Get names of a vector}
+\usage{
+names2(x)
+}
+\arguments{
+\item{x}{A vector.}
+}
+\description{
+This names getter always returns a character vector, even when an
+object does not have a \code{names} attribute. In this case, it returns
+a vector of empty names \code{""}. It also standardises missing names to
+\code{""}.
+}
+\examples{
+names2(letters)
+
+# It also takes care of standardising missing names:
+x <- set_names(1:3, c("a", NA, "b"))
+names2(x)
+}
diff --git a/man/new_cnd.Rd b/man/new_cnd.Rd
new file mode 100644
index 0000000..cff96b2
--- /dev/null
+++ b/man/new_cnd.Rd
@@ -0,0 +1,64 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd.R
+\name{new_cnd}
+\alias{new_cnd}
+\alias{cnd_error}
+\alias{cnd_warning}
+\alias{cnd_message}
+\title{Create a condition object}
+\usage{
+new_cnd(.type = NULL, ..., .msg = NULL)
+
+cnd_error(.type = NULL, ..., .msg = NULL)
+
+cnd_warning(.type = NULL, ..., .msg = NULL)
+
+cnd_message(.type = NULL, ..., .msg = NULL)
+}
+\arguments{
+\item{.type}{The condition subclass.}
+
+\item{...}{Named data fields stored inside the condition
+object. These dots are evaluated with \link[=dots_list]{explicit
+splicing}.}
+
+\item{.msg}{A default message to inform the user about the
+condition when it is signalled.}
+}
+\description{
+These constructors make it easy to create subclassed conditions.
+Conditions are objects that power the error system in R. They can
+also be used for passing messages to pre-established handlers.
+}
+\details{
+\code{new_cnd()} creates objects inheriting from \code{condition}. Conditions
+created with \code{cnd_error()}, \code{cnd_warning()} and \code{cnd_message()}
+inherit from \code{error}, \code{warning} or \code{message}.
+}
+\examples{
+# Create a condition inheriting from the S3 type "foo":
+cnd <- new_cnd("foo")
+
+# Signal the condition to potential handlers. This has no effect if no
+# handler is registered to deal with conditions of type "foo":
+cnd_signal(cnd)
+
+# If a relevant handler is on the current evaluation stack, it will be
+# called by cnd_signal():
+with_handlers(cnd_signal(cnd), foo = exiting(function(c) "caught!"))
+
+# Handlers can be thrown or executed inplace. See with_handlers()
+# documentation for more on this.
+
+
+# Note that merely signalling a condition inheriting from "error" is
+# not sufficient to stop a program:
+cnd_signal(cnd_error("my_error"))
+
+# you need to use stop() to signal a critical condition that should
+# terminate the program if not handled:
+# stop(cnd_error("my_error"))
+}
+\seealso{
+\code{\link[=cnd_signal]{cnd_signal()}}, \code{\link[=with_handlers]{with_handlers()}}.
+}
diff --git a/man/new_formula.Rd b/man/new_formula.Rd
new file mode 100644
index 0000000..25024cb
--- /dev/null
+++ b/man/new_formula.Rd
@@ -0,0 +1,26 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/formula.R
+\name{new_formula}
+\alias{new_formula}
+\title{Create a formula}
+\usage{
+new_formula(lhs, rhs, env = caller_env())
+}
+\arguments{
+\item{lhs, rhs}{A call, name, or atomic vector.}
+
+\item{env}{An environment.}
+}
+\value{
+A formula object.
+}
+\description{
+Create a formula
+}
+\examples{
+new_formula(quote(a), quote(b))
+new_formula(NULL, quote(b))
+}
+\seealso{
+\code{\link[=new_quosure]{new_quosure()}}
+}
diff --git a/man/new_function.Rd b/man/new_function.Rd
new file mode 100644
index 0000000..8d9aa9c
--- /dev/null
+++ b/man/new_function.Rd
@@ -0,0 +1,40 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/fn.R
+\name{new_function}
+\alias{new_function}
+\title{Create a function}
+\usage{
+new_function(args, body, env = caller_env())
+}
+\arguments{
+\item{args}{A named list of default arguments.  Note that if you
+want arguments that don't have defaults, you'll need to use the
+special function \link{alist}, e.g. \code{alist(a = , b = 1)}}
+
+\item{body}{A language object representing the code inside the
+function. Usually this will be most easily generated with
+\code{\link[base:quote]{base::quote()}}}
+
+\item{env}{The parent environment of the function, defaults to the
+calling environment of \code{new_function()}}
+}
+\description{
+This constructs a new function given its three components:
+list of arguments, body code and parent environment.
+}
+\examples{
+f <- function(x) x + 3
+g <- new_function(alist(x = ), quote(x + 3))
+
+# The components of the functions are identical
+identical(formals(f), formals(g))
+identical(body(f), body(g))
+identical(environment(f), environment(g))
+
+# But the functions are not identical because f has src code reference
+identical(f, g)
+
+attr(f, "srcref") <- NULL
+# Now they are:
+stopifnot(identical(f, g))
+}
diff --git a/man/ns_env.Rd b/man/ns_env.Rd
new file mode 100644
index 0000000..65a3e6b
--- /dev/null
+++ b/man/ns_env.Rd
@@ -0,0 +1,29 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{ns_env}
+\alias{ns_env}
+\alias{ns_imports_env}
+\alias{ns_env_name}
+\title{Get the namespace of a package}
+\usage{
+ns_env(pkg = NULL)
+
+ns_imports_env(pkg = NULL)
+
+ns_env_name(pkg = NULL)
+}
+\arguments{
+\item{pkg}{The name of a package. If \code{NULL}, the surrounding
+namespace is returned, or an error is issued if not called within
+a namespace. If a function, the enclosure of that function is
+checked.}
+}
+\description{
+Namespaces are the environment where all the functions of a package
+live. The parent environments of namespaces are the \code{imports}
+environments, which contain all the functions imported from other
+packages.
+}
+\seealso{
+\code{\link[=pkg_env]{pkg_env()}}
+}
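The `ns_env` page documents no examples. A minimal sketch of the accessors, assuming the rlang 0.1.2 API above; the final line only restates the page's claim that the imports environment is the parent of the namespace:

```r
# Sketch, based on the description above.
ns_env("base")             # the namespace environment of base
ns_env_name("base")        # the namespace name, as a string

# Per the page, the parent of a namespace is its imports environment:
identical(parent.env(ns_env("stats")), ns_imports_env("stats"))
```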
diff --git a/man/op-definition.Rd b/man/op-definition.Rd
new file mode 100644
index 0000000..da14919
--- /dev/null
+++ b/man/op-definition.Rd
@@ -0,0 +1,45 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/operators.R
+\name{op-definition}
+\alias{op-definition}
+\alias{:=}
+\alias{is_definition}
+\alias{new_definition}
+\title{Definition operator}
+\usage{
+":="()
+
+is_definition(x)
+
+new_definition(lhs, rhs, env = caller_env())
+}
+\arguments{
+\item{x}{An object to test.}
+
+\item{lhs, rhs}{Expressions for the LHS and RHS of the definition.}
+
+\item{env}{The evaluation environment bundled with the definition.}
+}
+\description{
+The definition operator is typically used in DSL packages like
+\code{ggvis} and \code{data.table}. It is exported in rlang as an alias to
+\code{~}. This makes it a quoting operator that can be shared between
+packages for computing on the language. Since it effectively
+creates formulas, it is immediately compatible with rlang's
+formulas and interpolation features.
+}
+\examples{
+# This is useful to provide an alternative way of specifying
+# arguments in DSLs:
+fn <- function(...) ..1
+f <- fn(arg := foo(bar) + baz)
+
+is_formula(f)
+f_lhs(f)
+f_rhs(f)
+
+# A predicate is provided to distinguish formulas from the
+# colon-equals operator:
+is_definition(a := b)
+is_definition(a ~ b)
+}
diff --git a/man/op-get-attr.Rd b/man/op-get-attr.Rd
new file mode 100644
index 0000000..59e1a5d
--- /dev/null
+++ b/man/op-get-attr.Rd
@@ -0,0 +1,21 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/operators.R
+\name{op-get-attr}
+\alias{op-get-attr}
+\alias{\%@\%}
+\title{Infix attribute accessor}
+\usage{
+x \%@\% name
+}
+\arguments{
+\item{x}{Object}
+
+\item{name}{Attribute name}
+}
+\description{
+Infix attribute accessor
+}
+\examples{
+factor(1:3) \%@\% "levels"
+mtcars \%@\% "class"
+}
diff --git a/man/op-na-default.Rd b/man/op-na-default.Rd
new file mode 100644
index 0000000..4e0606b
--- /dev/null
+++ b/man/op-na-default.Rd
@@ -0,0 +1,23 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/operators.R
+\name{op-na-default}
+\alias{op-na-default}
+\alias{\%|\%}
+\title{Replace missing values}
+\usage{
+x \%|\% y
+}
+\arguments{
+\item{x, y}{\code{y} for elements of \code{x} that are NA; otherwise, \code{x}.}
+}
+\description{
+This infix function is similar to \code{\%||\%} but is vectorised
+and provides a default value for missing elements. It is faster
+than using \code{\link[base:ifelse]{base::ifelse()}} and does not perform type conversions.
+}
+\examples{
+c("a", "b", NA, "c") \%|\% "default"
+}
+\seealso{
+\link{op-null-default}
+}
diff --git a/man/op-null-default.Rd b/man/op-null-default.Rd
new file mode 100644
index 0000000..31d680e
--- /dev/null
+++ b/man/op-null-default.Rd
@@ -0,0 +1,21 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/operators.R
+\name{op-null-default}
+\alias{op-null-default}
+\alias{\%||\%}
+\title{Default value for \code{NULL}}
+\usage{
+x \%||\% y
+}
+\arguments{
+\item{x, y}{If \code{x} is NULL, will return \code{y}; otherwise returns \code{x}.}
+}
+\description{
+This infix function makes it easy to replace \code{NULL}s with a default
+value. It's inspired by the way that Ruby's or operation (\code{||})
+works.
+}
+\examples{
+1 \%||\% 2
+NULL \%||\% 2
+}
diff --git a/man/pairlist.Rd b/man/pairlist.Rd
new file mode 100644
index 0000000..348d17f
--- /dev/null
+++ b/man/pairlist.Rd
@@ -0,0 +1,152 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-node.R
+\name{pairlist}
+\alias{pairlist}
+\alias{node}
+\alias{node_car}
+\alias{node_cdr}
+\alias{node_caar}
+\alias{node_cadr}
+\alias{node_cdar}
+\alias{node_cddr}
+\alias{mut_node_car}
+\alias{mut_node_cdr}
+\alias{mut_node_caar}
+\alias{mut_node_cadr}
+\alias{mut_node_cdar}
+\alias{mut_node_cddr}
+\alias{node_tag}
+\alias{mut_node_tag}
+\title{Helpers for pairlist and language nodes}
+\usage{
+node(newcar, newcdr)
+
+node_car(x)
+
+node_cdr(x)
+
+node_caar(x)
+
+node_cadr(x)
+
+node_cdar(x)
+
+node_cddr(x)
+
+mut_node_car(x, newcar)
+
+mut_node_cdr(x, newcdr)
+
+mut_node_caar(x, newcar)
+
+mut_node_cadr(x, newcar)
+
+mut_node_cdar(x, newcdr)
+
+mut_node_cddr(x, newcdr)
+
+node_tag(x)
+
+mut_node_tag(x, newtag)
+}
+\arguments{
+\item{newcar, newcdr}{The new CAR or CDR for the node. These can be
+any R objects.}
+
+\item{x}{A language or pairlist node. Note that these functions are
+barebones and do not perform any type checking.}
+
+\item{newtag}{The new tag for the node. This should be a symbol.}
+}
+\value{
+Setters like \code{mut_node_car()} invisibly return \code{x} modified
+in place. Getters return the requested node component.
+}
+\description{
+Like any \href{https://en.wikipedia.org/wiki/Parse_tree}{parse tree}, R
+expressions are structured as trees of nodes. Each node has two
+components: the head and the tail (though technically there is
+actually a third component for argument names, see details). Due to
+R's \href{https://en.wikipedia.org/wiki/CAR_and_CDR}{lisp roots}, the
+head of a node (or cons cell) is called the CAR and the tail is
+called the CDR (pronounced \emph{car} and \emph{cou-der}). While R's ordinary
+subsetting operators have builtin support for indexing into these
+trees and replacing elements, it is sometimes useful to manipulate
+the nodes more directly. This is the purpose of functions like
+\code{node_car()} and \code{mut_node_car()}. They are particularly useful to
+prototype algorithms for your C-level functions.
+\itemize{
+\item \code{node_car()} and \code{mut_node_car()} access or change the head of a node.
+\item \code{node_cdr()} and \code{mut_node_cdr()} access or change the tail of a node.
+\item Variants like \code{node_caar()} or \code{mut_node_cdar()} deal with the
+CAR of the CAR of a node or the CDR of the CAR of a node
+respectively. The letters in the middle indicate the type (CAR or
+CDR) and order of access.
+\item \code{node_tag()} and \code{mut_node_tag()} access or change the tag of a
+node. This is meant for argument names and should only contain
+symbols (not strings).
+\item \code{node()} creates a new node from two components.
+}
+}
+\details{
+R has two types of nodes to represent parse trees: language nodes,
+which represent function calls, and pairlist nodes, which represent
+arguments in a function call. These are the exact same data
+structures with a different name. This distinction is helpful for
+parsing the tree: the top-level node of a function call always has
+\emph{language} type while its arguments have \emph{pairlist} type.
+
+Note that it is risky to manipulate calls at the node level. First,
+the calls are changed in place. This is unlike base R operators
+which create a new copy of the language tree for each modification.
+To make sure modifying a language object does not produce
+side-effects, rlang exports the \code{duplicate()} function to create
+a deep copy (or optionally a shallow copy, i.e. only the top-level
+node is copied). The second danger is that R expects language trees
+to be structured as a \code{NULL}-terminated list. The CAR of a node is
+a data slot and can contain anything, including another node (which
+is how you form trees, as opposed to mere linked lists). On the
+other hand, the CDR has to be either another node, or \code{NULL}. If it
+is terminated by anything other than the \code{NULL} object, many R
+commands will crash, including functions like \code{str()}. It is up to
+you to ensure that the language list you have modified is
+\code{NULL}-terminated.
+
+Finally, all nodes can contain metadata in the TAG slot. This is
+meant for argument names and R expects tags to contain a symbol
+(not a string).
+}
+\examples{
+# Changing a node component happens in place and can have side
+# effects. Let's create a language object and a copy of it:
+lang <- quote(foo(bar))
+copy <- lang
+
+# Using R's builtin operators to change the language tree does not
+# create side effects:
+copy[[2]] <- quote(baz)
+copy
+lang
+
+# On the other hand, the CAR and CDR operators operate in-place. Let's
+# create new objects since the previous examples triggered a copy:
+lang <- quote(foo(bar))
+copy <- lang
+
+# Now we change the argument pairlist of `copy`, making sure the new
+# arguments are NULL-terminated:
+mut_node_cdr(copy, node(quote(BAZ), NULL))
+
+# Or equivalently:
+mut_node_cdr(copy, pairlist(quote(BAZ)))
+copy
+
+# The original object has been changed in place:
+lang
+}
+\seealso{
+\code{\link[=duplicate]{duplicate()}} for creating copy-safe objects,
+\code{\link[=lang_head]{lang_head()}} and \code{\link[=lang_tail]{lang_tail()}} as slightly higher level
+alternatives that check their input, and \code{\link[base:pairlist]{base::pairlist()}} for
+an easier way of creating a linked list of nodes.
+}
diff --git a/man/parse_expr.Rd b/man/parse_expr.Rd
new file mode 100644
index 0000000..f4aaf35
--- /dev/null
+++ b/man/parse_expr.Rd
@@ -0,0 +1,75 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/parse.R
+\name{parse_expr}
+\alias{parse_expr}
+\alias{parse_exprs}
+\alias{parse_quosure}
+\alias{parse_quosures}
+\title{Parse R code}
+\usage{
+parse_expr(x)
+
+parse_exprs(x)
+
+parse_quosure(x, env = caller_env())
+
+parse_quosures(x, env = caller_env())
+}
+\arguments{
+\item{x}{Text containing expressions to parse for
+\code{parse_expr()} and \code{parse_exprs()}. Can also be an R connection,
+for instance to a file. If the supplied connection is not open,
+it will be automatically closed and destroyed.}
+
+\item{env}{The environment for the formulas. Defaults to the
+context in which the parsing function was called. Can be any
+object with an \code{as_env()} method.}
+}
+\value{
+\code{parse_expr()} returns an expression, \code{parse_exprs()} returns a
+list of expressions.
+}
+\description{
+These functions parse and transform text into R expressions. This
+is the first step to interpret or evaluate a piece of R code
+written by a programmer.
+}
+\details{
+\code{parse_expr()} returns one expression. If the text contains more
+than one expression (separated by semicolons or new lines), an error is
+issued. On the other hand \code{parse_exprs()} can handle multiple
+expressions. It always returns a list of expressions (compare to
+\code{\link[base:parse]{base::parse()}} which returns a base::expression vector). All
+functions also support R connections.
+
+The \code{parse_quosure()} variants return expressions quoted in
+formulas rather than raw expressions.
+}
+\examples{
+# parse_expr() can parse any R expression:
+parse_expr("mtcars \%>\% dplyr::mutate(cyl_prime = cyl / sd(cyl))")
+
+# A string can contain several expressions separated by ; or \\n
+parse_exprs("NULL; list()\\n foo(bar)")
+
+# The quosure variants return formulas:
+parse_quosure("foo \%>\% bar()")
+parse_quosures("1; 2; mtcars")
+
+# The env argument is passed to as_env(). It can be e.g. a string
+# representing a scoped package environment:
+parse_quosure("identity(letters)", env = empty_env())
+parse_quosures("identity(letters); mtcars", env = "base")
+
+
+# You can also parse source files by passing a R connection. Let's
+# create a file containing R code:
+path <- tempfile("my-file.R")
+cat("1; 2; mtcars", file = path)
+
+# We can now parse it by supplying a connection:
+parse_exprs(file(path))
+}
+\seealso{
+\code{\link[base:parse]{base::parse()}}
+}
diff --git a/man/prepend.Rd b/man/prepend.Rd
new file mode 100644
index 0000000..be6bc72
--- /dev/null
+++ b/man/prepend.Rd
@@ -0,0 +1,31 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-utils.R
+\name{prepend}
+\alias{prepend}
+\title{Prepend a vector}
+\usage{
+prepend(x, values, before = 1)
+}
+\arguments{
+\item{x}{the vector to be modified.}
+
+\item{values}{to be included in the modified vector.}
+
+\item{before}{a subscript, before which the values are to be appended.}
+}
+\value{
+A merged vector.
+}
+\description{
+This is a companion to \code{\link[base:append]{base::append()}} to help merging two lists
+or atomic vectors. \code{prepend()} is a clearer semantic signal than
+\code{c()} that a vector is to be merged at the beginning of another,
+especially in a pipe chain.
+}
+\examples{
+x <- as.list(1:3)
+
+append(x, "a")
+prepend(x, "a")
+prepend(x, list("a", "b"), before = 3)
+}
diff --git a/man/prim_name.Rd b/man/prim_name.Rd
new file mode 100644
index 0000000..9777a68
--- /dev/null
+++ b/man/prim_name.Rd
@@ -0,0 +1,14 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/fn.R
+\name{prim_name}
+\alias{prim_name}
+\title{Name of a primitive function}
+\usage{
+prim_name(prim)
+}
+\arguments{
+\item{prim}{A primitive function such as \code{\link[base:c]{base::c()}}.}
+}
+\description{
+Name of a primitive function
+}
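The `prim_name` page has no examples. A minimal sketch, assuming `prim_name()` returns the function name as a string per the title and value description above; the outputs in the comments are assumptions, not verified results:

```r
# Sketch: prim_name() gives the name of a primitive function.
prim_name(base::c)     # presumably "c"
prim_name(base::sum)   # presumably "sum"

is.primitive(base::c)  # TRUE -- only primitives are valid input here
```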
diff --git a/man/quasiquotation.Rd b/man/quasiquotation.Rd
new file mode 100644
index 0000000..18932c4
--- /dev/null
+++ b/man/quasiquotation.Rd
@@ -0,0 +1,112 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quo-unquote.R
+\name{quasiquotation}
+\alias{quasiquotation}
+\alias{UQ}
+\alias{UQE}
+\alias{UQS}
+\alias{UQ}
+\alias{UQE}
+\alias{!!}
+\alias{UQS}
+\alias{!!!}
+\title{Quasiquotation of an expression}
+\usage{
+UQ(x)
+
+UQE(x)
+
+"!!"(x)
+
+UQS(x)
+}
+\arguments{
+\item{x}{An expression to unquote.}
+}
+\description{
+Quasiquotation is the mechanism that makes it possible to program
+flexibly with
+\href{http://rlang.tidyverse.org/articles/tidy-evaluation.html}{tidyeval}
+grammars like dplyr. It is enabled in all tidyeval functions, the
+most fundamental of which are \code{\link[=quo]{quo()}} and \code{\link[=expr]{expr()}}.
+
+Quasiquotation is the combination of quoting an expression while
+allowing immediate evaluation (unquoting) of part of that
+expression. We provide both syntactic operators and functional
+forms for unquoting.
+\itemize{
+\item \code{UQ()} and the \code{!!} operator unquote their argument. It gets
+evaluated immediately in the surrounding context.
+\item \code{UQE()} is like \code{UQ()} but retrieves the expression of
+\link[=is_quosureish]{quosureish} objects. It is a shortcut for \code{!! get_expr(x)}. Use this with care: it is potentially unsafe to
+discard the environment of the quosure.
+\item \code{UQS()} and the \code{!!!} operators unquote and splice their
+argument. The argument should evaluate to a vector or an
+expression. Each component of the vector is embedded as its own
+argument in the surrounding call. If the vector is named, the
+names are used as argument names.
+}
+}
+\section{Theory}{
+
+
+Formally, \code{quo()} and \code{expr()} are quasiquote functions, \code{UQ()} is
+the unquote operator, and \code{UQS()} is the unquote splice operator.
+These terms have a rich history in Lisp languages, and live on in
+modern languages like
+\href{https://docs.julialang.org/en/stable/manual/metaprogramming/}{Julia}
+and
+\href{https://docs.racket-lang.org/reference/quasiquote.html}{Racket}.
+}
+
+\examples{
+# Quasiquotation functions act like base::quote()
+quote(foo(bar))
+expr(foo(bar))
+quo(foo(bar))
+
+# In addition, they support unquoting:
+expr(foo(UQ(1 + 2)))
+expr(foo(!! 1 + 2))
+quo(foo(!! 1 + 2))
+
+# The !! operator is a handy syntactic shortcut for unquoting with
+# UQ(). However, you need to be a bit careful with operator
+# precedence. All arithmetic and comparison operators bind more
+# tightly than `!`:
+quo(1 +  !! (1 + 2 + 3) + 10)
+
+# For this reason you should always wrap the unquoted expression
+# with parentheses when operators are involved:
+quo(1 + (!! 1 + 2 + 3) + 10)
+
+# Or you can use the explicit unquote function:
+quo(1 + UQ(1 + 2 + 3) + 10)
+
+
+# Use !!! or UQS() if you want to add multiple arguments to a
+# function. It must evaluate to a list:
+args <- list(1:10, na.rm = TRUE)
+quo(mean( UQS(args) ))
+
+# You can combine the two
+var <- quote(xyz)
+extra_args <- list(trim = 0.9, na.rm = TRUE)
+quo(mean(UQ(var) , UQS(extra_args)))
+
+
+# Unquoting is especially useful for transforming successively a
+# captured expression:
+quo <- quo(foo(bar))
+quo <- quo(inner(!! quo, arg1))
+quo <- quo(outer(!! quo, !!! syms(letters[1:3])))
+quo
+
+# Since we are building the expression in the same environment, you
+# can also start with raw expressions and create a quosure in the
+# very last step to record the dynamic environment:
+expr <- expr(foo(bar))
+expr <- expr(inner(!! expr, arg1))
+quo <- quo(outer(!! expr, !!! syms(letters[1:3])))
+quo
+}
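The splicing behaviour documented above can be sketched interactively (a minimal sketch, assuming rlang 0.1.x is attached; `args` and `q` are illustrative names):

```r
library(rlang)

# Splice a list of arguments into a call with !!! (UQS()).
# Named elements of the list become named arguments of the call.
args <- list(1:10, na.rm = TRUE)
q <- quo(mean(!!! args))

# Evaluating the quosure is then equivalent to mean(1:10, na.rm = TRUE)
eval_tidy(q)  # 5.5
```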
diff --git a/man/quo-predicates.Rd b/man/quo-predicates.Rd
new file mode 100644
index 0000000..868edc1
--- /dev/null
+++ b/man/quo-predicates.Rd
@@ -0,0 +1,46 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quo.R
+\name{quo-predicates}
+\alias{quo-predicates}
+\alias{quo_is_missing}
+\alias{quo_is_symbol}
+\alias{quo_is_lang}
+\alias{quo_is_symbolic}
+\alias{quo_is_null}
+\title{Is a quosure quoting a symbolic, missing or NULL object?}
+\usage{
+quo_is_missing(quo)
+
+quo_is_symbol(quo)
+
+quo_is_lang(quo)
+
+quo_is_symbolic(quo)
+
+quo_is_null(quo)
+}
+\arguments{
+\item{quo}{A quosure.}
+}
+\description{
+These functions examine the expression of a quosure with a
+predicate.
+}
+\section{Empty quosures}{
+
+
+When missing arguments are captured as quosures, either through
+\code{\link[=enquo]{enquo()}} or \code{\link[=quos]{quos()}}, they are returned as an empty quosure. These
+quosures contain the \link[=missing_arg]{missing argument} and typically
+have the \link[=empty_env]{empty environment} as enclosure.
+}
+
+\examples{
+quo_is_symbol(quo(sym))
+quo_is_symbol(quo(foo(bar)))
+
+# You can create empty quosures by calling quo() without input:
+quo <- quo()
+quo_is_missing(quo)
+is_missing(f_rhs(quo))
+}
diff --git a/man/quo_expr.Rd b/man/quo_expr.Rd
new file mode 100644
index 0000000..c2bed96
--- /dev/null
+++ b/man/quo_expr.Rd
@@ -0,0 +1,57 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quo.R
+\name{quo_expr}
+\alias{quo_expr}
+\alias{quo_label}
+\alias{quo_text}
+\alias{quo_name}
+\title{Splice a quosure and format it into string or label}
+\usage{
+quo_expr(quo, warn = FALSE)
+
+quo_label(quo)
+
+quo_text(quo, width = 60L, nlines = Inf)
+
+quo_name(quo)
+}
+\arguments{
+\item{quo}{A quosure or expression.}
+
+\item{warn}{Whether to warn if the quosure contains other quosures
+(those will be collapsed).}
+
+\item{width}{Width of each line.}
+
+\item{nlines}{Maximum number of lines to extract.}
+}
+\description{
+\code{quo_expr()} flattens all quosures within an expression. That is, it
+turns \code{~foo(~bar(), ~baz)} to \code{foo(bar(), baz)}. \code{quo_text()} and
+\code{quo_label()} are equivalent to \code{\link[=f_text]{f_text()}}, \code{\link[=expr_label]{expr_label()}}, etc,
+but they first splice their argument using \code{quo_expr()}.
+\code{quo_name()} transforms a quoted symbol to a string. It conveys
+more intent and adds type checking compared to simply calling
+\code{quo_text()} on the quoted symbol (which works but does not
+throw an error if the input is not a symbol).
+}
+\examples{
+quo <- quo(foo(!! quo(bar)))
+quo
+
+# quo_expr() unwraps all quosures and returns a raw expression:
+quo_expr(quo)
+
+# This is used by quo_text() and quo_label():
+quo_text(quo)
+
+# Compare to the unwrapped expression:
+expr_text(quo)
+
+# quo_name() is helpful when you need really short labels:
+quo_name(quo(sym))
+quo_name(quo(!! sym))
+}
+\seealso{
+\code{\link[=expr_label]{expr_label()}}, \code{\link[=f_label]{f_label()}}
+}
diff --git a/man/quosure.Rd b/man/quosure.Rd
new file mode 100644
index 0000000..76afd2d
--- /dev/null
+++ b/man/quosure.Rd
@@ -0,0 +1,196 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/quo.R
+\name{quosure}
+\alias{quosure}
+\alias{quo}
+\alias{new_quosure}
+\alias{enquo}
+\title{Create quosures}
+\usage{
+quo(expr)
+
+new_quosure(expr, env = caller_env())
+
+enquo(arg)
+}
+\arguments{
+\item{expr}{An expression.}
+
+\item{env}{An environment specifying the lexical enclosure of the
+quosure.}
+
+\item{arg}{A symbol referring to an argument. The expression
+supplied to that argument will be captured unevaluated.}
+}
+\value{
+A formula whose right-hand side contains the quoted
+expression supplied as argument.
+}
+\description{
+Quosures are quoted \link[=is_expr]{expressions} that keep track of an
+\link[=env]{environment} (just like \href{http://adv-r.had.co.nz/Functional-programming.html#closures}{closure functions}).
+They are implemented as a subclass of one-sided formulas. They are
+an essential piece of the tidy evaluation framework.
+\itemize{
+\item \code{quo()} quotes its input (i.e. captures R code without
+evaluation), captures the current environment, and bundles them
+in a quosure.
+\item \code{enquo()} takes a symbol referring to a function argument, quotes
+the R code that was supplied to this argument, captures the
+environment where the function was called (and thus where the R
+code was typed), and bundles them in a quosure.
+\item \code{\link[=quos]{quos()}} is a bit different from the other functions as it returns a
+list of quosures. You can supply several expressions directly,
+e.g. \code{quos(foo, bar)}, but more importantly you can also supply
+dots: \code{quos(...)}. In the latter case, expressions forwarded
+through dots are captured and transformed to quosures. The
+environments bundled in those quosures are the ones where the
+code was supplied as arguments, even if the dots were forwarded
+multiple times across several function calls.
+\item \code{new_quosure()} is the only constructor that takes its arguments
+by value. It lets you create a quosure from an expression and an
+environment.
+}
+}
+\section{Role of quosures for tidy evaluation}{
+
+
+Quosures play an essential role thanks to these features:
+\itemize{
+\item They allow consistent scoping of quoted expressions by recording
+an expression along with its local environment.
+\item \code{quo()}, \code{quos()} and \code{enquo()} all support \link{quasiquotation}. By
+unquoting other quosures, you can safely combine expressions even
+when they come from different contexts. You can also unquote
+values and raw expressions depending on your needs.
+\item Unlike formulas, quosures self-evaluate (see \code{\link[=eval_tidy]{eval_tidy()}})
+within their own environment, which is why you can unquote a
+quosure inside another quosure and evaluate it like you've
+unquoted a raw expression.
+}
+
+See the \href{http://dplyr.tidyverse.org/articles/programming.html}{programming with dplyr}
+vignette for practical examples. For developers, the \href{http://rlang.tidyverse.org/articles/tidy-evaluation.html}{tidy evaluation}
+vignette provides an overview of this approach. The
+\link{quasiquotation} page goes in detail over the unquoting and
+splicing operators.
+}
+
+\examples{
+# quo() is a quotation function just like expr() and quote():
+expr(mean(1:10 * 2))
+quo(mean(1:10 * 2))
+
+# It supports quasiquotation and allows unquoting (evaluating
+# immediately) part of the quoted expression:
+quo(mean(!! 1:10 * 2))
+
+# What makes quo() often safer to use than quote() and expr() is
+# that it keeps track of the contextual environment. This is
+# especially important if you're referring to local variables in
+# the expression:
+var <- "foo"
+quo <- quo(var)
+quo
+
+# Here `quo` quotes `var`. Let's check that it also captures the
+# environment where that symbol is defined:
+identical(get_env(quo), get_env())
+env_has(quo, "var")
+
+
+# Keeping track of the environment is important when you quote an
+# expression in a context (that is, a particular function frame)
+# and pass it around to other functions (which will be run in their
+# own evaluation frame):
+fn <- function() {
+  foobar <- 10
+  quo(foobar * 2)
+}
+quo <- fn()
+quo
+
+# `foobar` is not defined here but was defined in `fn()`'s
+# evaluation frame. However, the quosure keeps track of that frame
+# and is safe to evaluate:
+eval_tidy(quo)
+
+
+# Like other formulas, quosures are normally self-quoting under
+# evaluation:
+eval(~var)
+eval(quo(var))
+
+# But eval_tidy() evaluates expressions in a special environment
+# (called the overscope) where they become promises. They
+# self-evaluate under evaluation:
+eval_tidy(~var)
+eval_tidy(quo(var))
+
+# Note that it's perfectly fine to unquote quosures within
+# quosures, as long as you evaluate with eval_tidy():
+quo <- quo(letters)
+quo <- quo(toupper(!! quo))
+quo
+eval_tidy(quo)
+
+
+# Quoting as a quosure is necessary to preserve scope information
+# and make sure objects are looked up in the right place. However,
+# there are situations where it can get in the way. This is the
+# case when you deal with non-tidy NSE functions that do not
+# understand formulas. You can inline the RHS of a formula in a
+# call thanks to the UQE() operator:
+nse_function <- function(arg) substitute(arg)
+var <- locally(quo(foo(bar)))
+quo(nse_function(UQ(var)))
+quo(nse_function(UQE(var)))
+
+# This is equivalent to unquoting and taking the RHS:
+quo(nse_function(!! get_expr(var)))
+
+# One of the most important old-style NSE functions is the dollar
+# operator. You need to use UQE() for subsetting with dollar:
+var <- quo(cyl)
+quo(mtcars$UQE(var))
+
+# `!!`() is also treated as a shortcut. It is meant for situations
+# where the bang operator would not parse, such as subsetting with
+# $. Since that's its main purpose, we've made it a shortcut for
+# UQE() rather than UQ():
+var <- quo(cyl)
+quo(mtcars$`!!`(var))
+
+
+# When a quosure is printed in the console, the brackets indicate
+# if the enclosure is the global environment or a local one:
+locally(quo(foo))
+
+# Literals are enquosed with the empty environment because they can
+# be evaluated anywhere. The brackets indicate "empty":
+quo(10L)
+
+# To differentiate local environments, use str(). It prints the
+# machine address of the environment:
+quo1 <- locally(quo(foo))
+quo2 <- locally(quo(foo))
+quo1; quo2
+str(quo1); str(quo2)
+
+# You can also see this address by printing the environment at the
+# console:
+get_env(quo1)
+get_env(quo2)
+
+
+# new_quosure() takes by value an expression that is already quoted:
+expr <- quote(mtcars)
+env <- as_env("datasets")
+quo <- new_quosure(expr, env)
+quo
+eval_tidy(quo)
+}
+\seealso{
+\code{\link[=expr]{expr()}} for quoting a raw expression with quasiquotation.
+The \link{quasiquotation} page goes over unquoting and splicing.
+}
diff --git a/man/quosures.Rd b/man/quosures.Rd
new file mode 100644
index 0000000..5ee7801
--- /dev/null
+++ b/man/quosures.Rd
@@ -0,0 +1,89 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/dots.R, R/quos.R
+\name{dots_definitions}
+\alias{dots_definitions}
+\alias{quosures}
+\alias{quos}
+\alias{is_quosures}
+\title{Tidy quotation of multiple expressions and dots}
+\usage{
+dots_definitions(..., .named = FALSE)
+
+quos(..., .named = FALSE, .ignore_empty = c("trailing", "none", "all"))
+
+is_quosures(x)
+}
+\arguments{
+\item{...}{Expressions to capture unevaluated.}
+
+\item{.named}{Whether to ensure all dots are named. Unnamed
+elements are processed with \code{\link[=expr_text]{expr_text()}} to figure out a default
+name. If an integer, it is passed to the \code{width} argument of
+\code{expr_text()}, if \code{TRUE}, the default width is used. See
+\code{\link[=exprs_auto_name]{exprs_auto_name()}}.}
+
+\item{.ignore_empty}{Whether to ignore empty arguments. Can be one
+of \code{"trailing"}, \code{"none"}, \code{"all"}. If \code{"trailing"}, only the
+last argument is ignored if it is empty.}
+
+\item{x}{An object to test.}
+}
+\description{
+\code{quos()} quotes its arguments and returns them as a list of
+quosures (see \code{\link[=quo]{quo()}}). It is especially useful to capture
+arguments forwarded through \code{...}.
+}
+\details{
+Both \code{quos()} and \code{dots_definitions()} have specific support for
+definition expressions of the type \code{var := expr}, with some
+differences:
+
+\describe{
+\item{\code{quos()}}{
+When \code{:=} definitions are supplied to \code{quos()}, they are treated
+as a synonym of argument assignment \code{=}. On the other hand, they
+allow unquoting operators on the left-hand side, which makes it
+easy to assign names programmatically.}
+\item{\code{dots_definitions()}}{
+This dots capturing function returns definitions as is. Unquote
+operators are processed on capture, in both the LHS and the
+RHS. Unlike \code{quos()}, it allows named definitions.}
+}
+}
+\examples{
+# quos() is like the singular version but allows quoting
+# several arguments:
+quos(foo(), bar(baz), letters[1:2], !! letters[1:2])
+
+# It is most useful when used with dots. This allows quoting
+# expressions across different levels of function calls:
+fn <- function(...) quos(...)
+fn(foo(bar), baz)
+
+# Note that quos() does not check for duplicate named
+# arguments:
+fn <- function(...) quos(x = x, ...)
+fn(x = a + b)
+
+
+# Dots can be spliced in:
+args <- list(x = 1:3, y = ~var)
+quos(!!! args, z = 10L)
+
+# Raw expressions are turned to formulas:
+args <- alist(x = foo, y = bar)
+quos(!!! args)
+
+
+# Definitions are treated similarly to named arguments:
+quos(x := expr, y = expr)
+
+# However, the LHS of definitions can be unquoted. The return value
+# must be a symbol or a string:
+var <- "foo"
+quos(!!var := expr)
+
+# If you need the full LHS expression, use dots_definitions():
+dots <- dots_definitions(var = foo(baz) := bar(baz))
+dots$defs
+}
diff --git a/man/restarting.Rd b/man/restarting.Rd
new file mode 100644
index 0000000..e542505
--- /dev/null
+++ b/man/restarting.Rd
@@ -0,0 +1,69 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd-handlers.R
+\name{restarting}
+\alias{restarting}
+\title{Create a restarting handler}
+\usage{
+restarting(.restart, ..., .fields = NULL)
+}
+\arguments{
+\item{.restart}{The name of a restart.}
+
+\item{...}{Additional arguments passed on to the restart
+function. These arguments are evaluated only once and
+immediately, when creating the restarting handler. Furthermore,
+they are evaluated with \link[=dots_list]{explicit splicing}.}
+
+\item{.fields}{A character vector specifying the fields of the
+condition that should be passed as arguments to the restart. If
+named, the names (except empty names \code{""}) are used as
+argument names for calling the restart function. Otherwise the
+fields themselves are used as argument names.}
+}
+\description{
+This constructor automates the common task of creating an
+\code{\link[=inplace]{inplace()}} handler that invokes a restart.
+}
+\details{
+Jumping to a restart point from an inplace handler has two
+effects. First, the control flow jumps to wherever the restart was
+established, and the restart function is called (with \code{...}, or
+\code{.fields} as arguments). Execution resumes from the
+\code{\link[=with_restarts]{with_restarts()}} call. Second, the transfer of control
+flow out of the function that signalled the condition means that the
+handler has dealt with the condition. Thus the condition will not
+be passed on to other potential handlers established on the stack.
+}
+\examples{
+# This is a restart that takes a data frame and names as arguments
+rst_bar <- function(df, nms) {
+  stats::setNames(df, nms)
+}
+
+# This restart is simpler and does not take arguments
+rst_baz <- function() "baz"
+
+# Signalling a condition parameterised with a data frame
+fn <- function() {
+  with_restarts(cnd_signal("foo", foo_field = mtcars),
+    rst_bar = rst_bar,
+    rst_baz = rst_baz
+  )
+}
+
+# Creating a restarting handler that passes arguments `nms` and
+# `df`, the latter taken from a data field of the condition object
+restart_bar <- restarting("rst_bar",
+  nms = LETTERS[1:11], .fields = c(df = "foo_field")
+)
+
+# The restarting handlers jumps to `rst_bar` when `foo` is signalled:
+with_handlers(fn(), foo = restart_bar)
+
+# The restarting() constructor is especially nice to use with
+# restarts that do not need arguments:
+with_handlers(fn(), foo = restarting("rst_baz"))
+}
+\seealso{
+\code{\link[=inplace]{inplace()}} and \code{\link[=exiting]{exiting()}}.
+}
diff --git a/man/return_from.Rd b/man/return_from.Rd
new file mode 100644
index 0000000..d32ad92
--- /dev/null
+++ b/man/return_from.Rd
@@ -0,0 +1,54 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{return_from}
+\alias{return_from}
+\alias{return_to}
+\title{Jump to or from a frame}
+\usage{
+return_from(frame, value = NULL)
+
+return_to(frame, value = NULL)
+}
+\arguments{
+\item{frame}{An environment, a frame object, or any object with a
+\code{\link[=get_env]{get_env()}} method. The environment should be an evaluation
+environment currently on the stack.}
+
+\item{value}{The return value.}
+}
+\description{
+While \code{\link[base:return]{base::return()}} can only return from the current local
+frame, these two functions will return from any frame on the
+current evaluation stack, between the global and the currently
+active context. They provide a way of performing arbitrary
+non-local jumps out of the function currently under evaluation.
+}
+\details{
+\code{return_from()} will jump out of \code{frame}. \code{return_to()} is a bit
+trickier. It will jump out of the frame located just before \code{frame}
+in the evaluation stack, so that control flow ends up in \code{frame},
+at the location where the previous frame was called from.
+
+These functions should be used only rarely. This sort of non-local
+goto can be hard to reason about in casual code, though it can
+sometimes be useful. Also, consider using the condition system to
+perform non-local jumps.
+}
+\examples{
+# Passing fn() evaluation frame to g():
+fn <- function() {
+  val <- g(get_env())
+  cat("g returned:", val, "\\n")
+  "normal return"
+}
+g <- function(env) h(env)
+
+# Here we return from fn() with a new return value:
+h <- function(env) return_from(env, "early return")
+fn()
+
+# Here we return to fn(). The call stack unwinds until the last frame
+# called by fn(), which is g() in that case.
+h <- function(env) return_to(env, "early return")
+fn()
+}
diff --git a/man/rst_abort.Rd b/man/rst_abort.Rd
new file mode 100644
index 0000000..fe36717
--- /dev/null
+++ b/man/rst_abort.Rd
@@ -0,0 +1,57 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd-restarts.R
+\name{rst_abort}
+\alias{rst_abort}
+\title{Jump to the abort restart}
+\usage{
+rst_abort()
+}
+\description{
+The abort restart is the only restart that is established at top
+level. It is used by R as a top-level target, most notably when an
+error is issued (see \code{\link[=abort]{abort()}}) that no handler is able
+to deal with (see \code{\link[=with_handlers]{with_handlers()}}).
+}
+\examples{
+# The `abort` restart is a bit special in that it is always
+# registered in an R session. You will always find it on the restart
+# stack because it is established at top level:
+rst_list()
+
+# You can use the `abort` restart to jump to top level without
+# signalling an error:
+\dontrun{
+fn <- function() {
+  cat("aborting...\\n")
+  rst_abort()
+  cat("This is never called\\n")
+}
+{
+  fn()
+  cat("This is never called\\n")
+}
+}
+
+# The `abort` restart is the target that R uses to jump to top
+# level when critical errors are signalled:
+\dontrun{
+{
+  abort("error")
+  cat("This is never called\\n")
+}
+}
+
+# If another `abort` restart is specified, errors are signalled as
+# usual but then control flow resumes from the new restart:
+\dontrun{
+out <- NULL
+{
+  out <- with_restarts(abort("error"), abort = function() "restart!")
+  cat("This is called\\n")
+}
+cat("`out` has now become:", out, "\\n")
+}
+}
+\seealso{
+\code{\link[=rst_jump]{rst_jump()}}, \code{\link[=abort]{abort()}} and \code{\link[=cnd_abort]{cnd_abort()}}.
+}
diff --git a/man/rst_list.Rd b/man/rst_list.Rd
new file mode 100644
index 0000000..a9b38fd
--- /dev/null
+++ b/man/rst_list.Rd
@@ -0,0 +1,35 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd-restarts.R
+\name{rst_list}
+\alias{rst_list}
+\alias{rst_exists}
+\alias{rst_jump}
+\alias{rst_maybe_jump}
+\title{Restarts utilities}
+\usage{
+rst_list()
+
+rst_exists(.restart)
+
+rst_jump(.restart, ...)
+
+rst_maybe_jump(.restart, ...)
+}
+\arguments{
+\item{.restart}{The name of a restart.}
+
+\item{...}{Arguments passed on to the restart function. These
+dots are evaluated with \link[=dots_list]{explicit splicing}.}
+}
+\description{
+Restarts are named jumping points established by \code{\link[=with_restarts]{with_restarts()}}.
+\code{rst_list()} returns the names of all restarts currently
+established. \code{rst_exists()} checks if a given restart is
+established. \code{rst_jump()} stops execution of the current function
+and jumps to a restart point. If the restart does not exist, an
+error is thrown.  \code{rst_maybe_jump()} first checks that a restart
+exists before jumping.
+}
+\seealso{
+\code{\link[=with_restarts]{with_restarts()}}, \code{\link[=rst_muffle]{rst_muffle()}}.
+}
diff --git a/man/rst_muffle.Rd b/man/rst_muffle.Rd
new file mode 100644
index 0000000..d04bab8
--- /dev/null
+++ b/man/rst_muffle.Rd
@@ -0,0 +1,72 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd-restarts.R
+\name{rst_muffle}
+\alias{rst_muffle}
+\title{Jump to a muffling restart}
+\usage{
+rst_muffle(c)
+}
+\arguments{
+\item{c}{A condition to muffle.}
+}
+\description{
+Muffle restarts are established at the same location as where a
+condition is signalled. They are useful for two non-exclusive
+purposes: muffling signalling functions and muffling conditions. In
+the first case, \code{rst_muffle()} prevents any further side effects of
+a signalling function (a warning or message from being displayed,
+an aborting jump to top level, etc). In the second case, the
+muffling jump prevents a condition from being passed on to other
+handlers. In both cases, execution resumes normally from the point
+where the condition was signalled.
+}
+\examples{
+side_effect <- function() cat("side effect!\\n")
+handler <- inplace(function(c) side_effect())
+
+# A muffling handler is an inplace handler that jumps to a muffle
+# restart:
+muffling_handler <- inplace(function(c) {
+  side_effect()
+  rst_muffle(c)
+})
+
+# You can also create a muffling handler simply by setting
+# muffle = TRUE:
+muffling_handler <- inplace(function(c) side_effect(), muffle = TRUE)
+
+# You can then muffle the signalling function:
+fn <- function(signal, msg) {
+  signal(msg)
+  "normal return value"
+}
+with_handlers(fn(message, "some message"), message = handler)
+with_handlers(fn(message, "some message"), message = muffling_handler)
+with_handlers(fn(warning, "some warning"), warning = muffling_handler)
+
+# Note that exiting handlers are thrown to the establishing point
+# before being executed. At that point, the restart (established
+# within the signalling function) does not exist anymore:
+\dontrun{
+with_handlers(fn(warning, "some warning"),
+  warning = exiting(function(c) rst_muffle(c)))
+}
+
+
+# Another use case for muffle restarts is to muffle conditions
+# themselves. That is, to prevent other condition handlers from
+# being called:
+undesirable_handler <- inplace(function(c) cat("please don't call me\\n"))
+
+with_handlers(foo = undesirable_handler,
+  with_handlers(foo = muffling_handler, {
+    cnd_signal("foo", mufflable = TRUE)
+    "return value"
+  }))
+
+# See the `mufflable` argument of cnd_signal() for more on this point
+}
+\seealso{
+The \code{muffle} argument of \code{\link[=inplace]{inplace()}}, and the \code{mufflable}
+argument of \code{\link[=cnd_signal]{cnd_signal()}}.
+}
diff --git a/man/scalar-type-predicates.Rd b/man/scalar-type-predicates.Rd
new file mode 100644
index 0000000..18511f4
--- /dev/null
+++ b/man/scalar-type-predicates.Rd
@@ -0,0 +1,49 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{scalar-type-predicates}
+\alias{scalar-type-predicates}
+\alias{is_scalar_list}
+\alias{is_scalar_atomic}
+\alias{is_scalar_vector}
+\alias{is_scalar_integer}
+\alias{is_scalar_double}
+\alias{is_scalar_character}
+\alias{is_scalar_logical}
+\alias{is_scalar_raw}
+\alias{is_string}
+\alias{is_scalar_bytes}
+\title{Scalar type predicates}
+\usage{
+is_scalar_list(x)
+
+is_scalar_atomic(x)
+
+is_scalar_vector(x)
+
+is_scalar_integer(x)
+
+is_scalar_double(x)
+
+is_scalar_character(x, encoding = NULL)
+
+is_scalar_logical(x)
+
+is_scalar_raw(x)
+
+is_string(x, encoding = NULL)
+
+is_scalar_bytes(x)
+}
+\arguments{
+\item{x}{object to be tested.}
+
+\item{encoding}{Expected encoding of a string or character
+vector. One of \code{UTF-8}, \code{latin1}, or \code{unknown}.}
+}
+\description{
+These predicates check for a given type and whether the vector is
+"scalar", that is, of length 1.
+}
+\seealso{
+\link{type-predicates}, \link{bare-type-predicates}
+}
diff --git a/man/scoped_env.Rd b/man/scoped_env.Rd
new file mode 100644
index 0000000..ed9095c
--- /dev/null
+++ b/man/scoped_env.Rd
@@ -0,0 +1,98 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/env.R
+\name{scoped_env}
+\alias{scoped_env}
+\alias{pkg_env}
+\alias{pkg_env_name}
+\alias{scoped_names}
+\alias{scoped_envs}
+\alias{is_scoped}
+\alias{base_env}
+\alias{global_env}
+\title{Scoped environments}
+\usage{
+scoped_env(nm)
+
+pkg_env(pkg)
+
+pkg_env_name(pkg)
+
+scoped_names()
+
+scoped_envs()
+
+is_scoped(nm)
+
+base_env()
+
+global_env()
+}
+\arguments{
+\item{nm}{The name of an environment attached to the search
+path. Call \code{\link[base:search]{base::search()}} to see what is currently on the path.}
+
+\item{pkg}{The name of a package.}
+}
+\description{
+Scoped environments are named environments which form a
+parent-child hierarchy called the search path. They define what
+objects you can see (are in scope) from your workspace. They
+typically are package environments, i.e. special environments
+containing all exported functions from a package (and whose parent
+environment is the package namespace, which also contains
+unexported functions). Package environments are attached to the
+search path with \code{\link[base:library]{base::library()}}. Note however that any
+environment can be attached to the search path, for example with
+the unrecommended \code{\link[base:attach]{base::attach()}} function, which transforms
+vectors into scoped environments.
+\itemize{
+\item You can list all scoped environments with \code{scoped_names()}. Unlike
+\code{\link[base:search]{base::search()}}, it also mentions the empty environment that
+terminates the search path (it is given the name \code{"NULL"}).
+\item \code{scoped_envs()} returns all environments on the search path,
+including the empty environment.
+\item \code{pkg_env()} takes a package name and returns the scoped
+environment of that package if it is attached to the search path,
+and throws an error otherwise.
+\item \code{is_scoped()} allows you to check whether a named environment is
+on the search path.
+}
+}
+\section{Search path}{
+
+
+The search path is a chain of scoped environments where newly
+attached environments are the childs of earlier ones. However, the
+global environment, where everything you define at top-level ends
+up, is pinned as the head of that linked chain. Likewise, the base
+package environment is pinned as the tail of the chain. You can
+retrieve those environments with \code{global_env()} and \code{base_env()}
+respectively. The global environment is also the environment of the
+very first evaluation frame on the stack, see \code{\link[=global_frame]{global_frame()}} and
+\code{\link[=ctxt_stack]{ctxt_stack()}}.
+}
+
+\examples{
+# List the names of scoped environments:
+nms <- scoped_names()
+nms
+
+# The global environment is always the first in the chain:
+scoped_env(nms[[1]])
+
+# And the scoped environment of the base package is always the last:
+scoped_env(nms[[length(nms)]])
+
+# These two environments have their own shortcuts:
+global_env()
+base_env()
+
+# Packages appear in the search path with a special name. Use
+# pkg_env_name() to create that name:
+pkg_env_name("rlang")
+scoped_env(pkg_env_name("rlang"))
+
+# Alternatively, get the scoped environment of a package with
+# pkg_env():
+pkg_env("utils")
+}
diff --git a/man/seq2.Rd b/man/seq2.Rd
new file mode 100644
index 0000000..784ee58
--- /dev/null
+++ b/man/seq2.Rd
@@ -0,0 +1,36 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-utils.R
+\name{seq2}
+\alias{seq2}
+\alias{seq2_along}
+\title{Increasing sequence of integers in an interval}
+\usage{
+seq2(from, to)
+
+seq2_along(from, x)
+}
+\arguments{
+\item{from}{The starting point of the sequence.}
+
+\item{to}{The end point.}
+
+\item{x}{A vector whose length is the end point.}
+}
+\value{
+An integer vector containing a strictly increasing
+sequence.
+}
+\description{
+These helpers take two endpoints and return the sequence of all
+integers within that interval. For \code{seq2_along()}, the upper
+endpoint is taken from the length of a vector. Unlike
+\code{base::seq()}, they return an empty vector if the starting point is
+larger than the end point.
+}
+\examples{
+seq2(2, 10)
+seq2(10, 2)
+seq(10, 2)
+
+seq2_along(10, letters)
+}
diff --git a/man/set_attrs.Rd b/man/set_attrs.Rd
new file mode 100644
index 0000000..8ac9f36
--- /dev/null
+++ b/man/set_attrs.Rd
@@ -0,0 +1,52 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/attr.R
+\name{set_attrs}
+\alias{set_attrs}
+\alias{mut_attrs}
+\title{Add attributes to an object}
+\usage{
+set_attrs(.x, ...)
+
+mut_attrs(.x, ...)
+}
+\arguments{
+\item{.x}{An object to decorate with attributes.}
+
+\item{...}{A list of named attributes. These have \link[=dots_list]{explicit
+splicing semantics}. Pass a single unnamed \code{NULL} to
+zap all attributes from \code{.x}.}
+}
+\value{
+\code{set_attrs()} returns a modified \link[=duplicate]{shallow copy}
+of \code{.x}. \code{mut_attrs()} invisibly returns the original \code{.x}
+modified in place.
+}
+\description{
+\code{set_attrs()} adds, changes, or zaps attributes of objects. Pass a
+single unnamed \code{NULL} as argument to zap all attributes. For
+\link[=is_copyable]{uncopyable} types, use \code{mut_attrs()}.
+}
+\details{
+Unlike \code{\link[=structure]{structure()}}, these setters have no special handling of
+internal attributes names like \code{.Dim}, \code{.Dimnames} or \code{.Names}.
+}
+\examples{
+set_attrs(letters, names = 1:26, class = "my_chr")
+
+# Splice a list of attributes:
+attrs <- list(attr = "attr", names = 1:26, class = "my_chr")
+obj <- set_attrs(letters, splice(attrs))
+obj
+
+# Zap attributes by passing a single unnamed NULL argument:
+set_attrs(obj, NULL)
+set_attrs(obj, !!! list(NULL))
+
+# Note that set_attrs() never modifies objects in place:
+obj
+
+# For uncopyable types, mut_attrs() lets you modify in place:
+env <- env()
+mut_attrs(env, foo = "bar")
+env
+}
diff --git a/man/set_chr_encoding.Rd b/man/set_chr_encoding.Rd
new file mode 100644
index 0000000..cb98d0b
--- /dev/null
+++ b/man/set_chr_encoding.Rd
@@ -0,0 +1,77 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-chr.R
+\name{set_chr_encoding}
+\alias{set_chr_encoding}
+\alias{chr_encoding}
+\alias{set_str_encoding}
+\alias{str_encoding}
+\title{Set encoding of a string or character vector}
+\usage{
+set_chr_encoding(x, encoding = c("unknown", "UTF-8", "latin1", "bytes"))
+
+chr_encoding(x)
+
+set_str_encoding(x, encoding = c("unknown", "UTF-8", "latin1", "bytes"))
+
+str_encoding(x)
+}
+\arguments{
+\item{x}{A string or character vector.}
+
+\item{encoding}{Either an encoding specially handled by R
+(\code{"UTF-8"} or \code{"latin1"}), \code{"bytes"} to inhibit all encoding
+conversions, or \code{"unknown"} if the string should be treated as
+encoded in the current locale codeset.}
+}
+\description{
+R has specific support for UTF-8 and latin1 encoded strings. This
+mostly matters for internal conversions. Thanks to this support,
+you can reencode strings to UTF-8 or latin1 for internal
+processing, and return these strings without having to convert them
+back to the native encoding. However, it is important to make sure
+the encoding mark has not been lost in the process, otherwise the
+output will be treated as if encoded according to the current
+locale (see \code{\link[=mut_utf8_locale]{mut_utf8_locale()}} for documentation about locale
+codesets), which is not appropriate if it does not coincide with
+the actual encoding. In those situations, you can use these
+functions to ensure an encoding mark in your strings.
+}
+\examples{
+# Encoding marks are always ignored on ASCII strings:
+str_encoding(set_str_encoding("cafe", "UTF-8"))
+
+# You can specify the encoding of strings containing non-ASCII
+# characters:
+cafe <- string(c(0x63, 0x61, 0x66, 0xC3, 0xA9))
+str_encoding(cafe)
+str_encoding(set_str_encoding(cafe, "UTF-8"))
+
+
+# It is important to consistently mark the encoding of strings
+# because R and other packages perform internal string conversions
+# all the time. Here is an example with the names attribute:
+latin1 <- string(c(0x63, 0x61, 0x66, 0xE9), "latin1")
+latin1 <- set_names(latin1)
+
+# The names attribute is encoded in latin1 as we would expect:
+str_encoding(names(latin1))
+
+# However the names are converted to UTF-8 by the c() function:
+str_encoding(names(c(latin1)))
+as_bytes(names(c(latin1)))
+
+# Bad things happen when the encoding marker is lost and R performs
+# a conversion. R will assume that the string is encoded according
+# to the current locale:
+\dontrun{
+bad <- set_names(set_str_encoding(latin1, "unknown"))
+mut_utf8_locale()
+
+str_encoding(names(c(bad)))
+as_bytes(names(c(bad)))
+}
+}
+\seealso{
+\code{\link[=mut_utf8_locale]{mut_utf8_locale()}} about the effects of the locale, and
+\code{\link[=as_utf8_string]{as_utf8_string()}} about encoding conversion.
+}
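The declarative nature of encoding marks can be shown with base R alone (this sketch uses only base functions, not the rlang helpers documented here): setting the mark changes how R interprets the bytes, never the bytes themselves.

```r
# Base-R illustration (not the rlang API): an encoding mark is
# declarative, so setting it changes interpretation, not bytes.
cafe <- rawToChar(as.raw(c(0x63, 0x61, 0x66, 0xC3, 0xA9)))  # "café" as UTF-8 bytes
Encoding(cafe) <- "UTF-8"  # mark the string; no conversion happens

Encoding(cafe)               # "UTF-8"
nchar(cafe, type = "bytes")  # still 5 bytes either way
```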
diff --git a/man/set_expr.Rd b/man/set_expr.Rd
new file mode 100644
index 0000000..875ea92
--- /dev/null
+++ b/man/set_expr.Rd
@@ -0,0 +1,45 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr.R
+\name{set_expr}
+\alias{set_expr}
+\alias{get_expr}
+\title{Set and get an expression}
+\usage{
+set_expr(x, value)
+
+get_expr(x, default = x)
+}
+\arguments{
+\item{x}{An expression or one-sided formula. In addition,
+\code{set_expr()} accepts frames.}
+
+\item{value}{An updated expression.}
+
+\item{default}{A default expression to return when \code{x} is not an
+expression wrapper. Defaults to \code{x} itself.}
+}
+\value{
+The updated original input for \code{set_expr()}. A raw
+expression for \code{get_expr()}.
+}
+\description{
+These helpers are useful to make your function work generically
+with quosures and raw expressions. First call \code{get_expr()} to
+extract an expression. Once you're done processing the expression,
+call \code{set_expr()} on the original object to update the expression.
+You can return the result of \code{set_expr()}, either a formula or an
+expression depending on the input type. Note that \code{set_expr()} does
+not change its input, it creates a new object.
+}
+\examples{
+f <- ~foo(bar)
+e <- quote(foo(bar))
+frame <- identity(identity(ctxt_frame()))
+
+get_expr(f)
+get_expr(e)
+get_expr(frame)
+
+set_expr(f, quote(baz))
+set_expr(e, quote(baz))
+}
diff --git a/man/set_names.Rd b/man/set_names.Rd
new file mode 100644
index 0000000..a8ac7c9
--- /dev/null
+++ b/man/set_names.Rd
@@ -0,0 +1,44 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/attr.R
+\name{set_names}
+\alias{set_names}
+\title{Set names of a vector}
+\usage{
+set_names(x, nm = x, ...)
+}
+\arguments{
+\item{x}{Vector to name.}
+
+\item{nm, ...}{Vector of names, the same length as \code{x}.
+
+You can specify names in the following ways:
+\itemize{
+\item If you do nothing, \code{x} will be named with itself.
+\item If \code{x} already has names, you can provide a function or formula
+to transform the existing names. In that case, \code{...} is passed
+to the function.
+\item If \code{nm} is \code{NULL}, the names are removed (if present).
+\item In all other cases, \code{nm} and \code{...} are passed to \code{\link[=chr]{chr()}}. This
+gives implicit splicing semantics: you can pass character
+vectors or list of character vectors indistinctly.
+}}
+}
+\description{
+This is equivalent to \code{\link[stats:setNames]{stats::setNames()}}, with more features and
+stricter argument checking.
+}
+\examples{
+set_names(1:4, c("a", "b", "c", "d"))
+set_names(1:4, letters[1:4])
+set_names(1:4, "a", "b", "c", "d")
+
+# If the second argument is omitted, a vector is named with itself
+set_names(letters[1:5])
+
+# Alternatively you can supply a function
+set_names(1:10, ~ letters[seq_along(.)])
+set_names(head(mtcars), toupper)
+
+# `...` is passed to the function:
+set_names(head(mtcars), paste0, "_foo")
+}
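The core behaviour can be sketched in base R. This is a hypothetical, simplified version (the real rlang `set_names()` additionally supports splicing and function/formula transforms of existing names):

```r
# Hypothetical sketch of set_names()'s core behaviour; the real
# rlang version adds splicing and function/formula support.
set_names_sketch <- function(x, nm = x) {
  names(x) <- if (is.null(nm)) NULL else as.character(nm)
  x
}

set_names_sketch(1:3, c("a", "b", "c"))
set_names_sketch(letters[1:3])    # named with itself
set_names_sketch(c(a = 1), NULL)  # names removed
```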
diff --git a/man/splice.Rd b/man/splice.Rd
new file mode 100644
index 0000000..337cb4f
--- /dev/null
+++ b/man/splice.Rd
@@ -0,0 +1,52 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-squash.R
+\name{splice}
+\alias{splice}
+\alias{is_spliced}
+\alias{is_spliced_bare}
+\title{Splice a list within a vector}
+\usage{
+splice(x)
+
+is_spliced(x)
+
+is_spliced_bare(x)
+}
+\arguments{
+\item{x}{A list to splice.}
+}
+\description{
+This adjective signals to functions taking dots that \code{x} should be
+spliced in a surrounding vector. Examples of functions that support
+such explicit splicing are \code{\link[=ll]{ll()}}, \code{\link[=chr]{chr()}}, etc. Generally, any
+function taking dots with \code{\link[=dots_list]{dots_list()}} or \code{\link[=dots_splice]{dots_splice()}}
+supports splicing.
+}
+\details{
+Note that all functions supporting dots splicing also support the
+syntactic operator \code{!!!}. For tidy capture and tidy evaluation,
+this operator directly manipulates the calls (see \code{\link[=quo]{quo()}} and
+\link{quasiquotation}). However manipulating the call is not appropriate
+when taking dots by value rather than by expression, because it is
+slow and the dots might contain large lists of data. For this
+reason we splice values rather than expressions when dots are not
+captured by expression. We do it in two steps: first mark the
+objects to be spliced, then splice the objects with \code{\link[=flatten]{flatten()}}.
+}
+\examples{
+x <- list("a")
+
+# It makes sense for ll() to accept lists literally, so it doesn't
+# automatically splice them:
+ll(x)
+
+# But you can splice lists explicitly:
+y <- splice(x)
+ll(y)
+
+# Or with the syntactic shortcut:
+ll(!!! x)
+}
+\seealso{
+\link{vector-construction}
+}
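The two-step mark-then-splice mechanism described in the details can be sketched in base R. All names here are invented for illustration; rlang's real functions are `splice()`, `is_spliced()` and `flatten()`:

```r
# Hypothetical sketch of the mark-then-splice mechanism: first mark
# the list, then flatten marked arguments into the surrounding output.
mark_spliced <- function(x) structure(x, class = "spliced")
is_spliced_sketch <- function(x) inherits(x, "spliced")

collect_dots <- function(...) {
  dots <- list(...)
  out <- list()
  for (d in dots) {
    if (is_spliced_sketch(d)) {
      out <- c(out, unclass(d))  # splice elements into the output
    } else {
      out <- c(out, list(d))     # keep the argument as one element
    }
  }
  out
}

collect_dots("a", mark_spliced(list("b", "c")))  # list of length 3
```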
diff --git a/man/stack.Rd b/man/stack.Rd
new file mode 100644
index 0000000..f2f1a93
--- /dev/null
+++ b/man/stack.Rd
@@ -0,0 +1,147 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{stack}
+\alias{stack}
+\alias{global_frame}
+\alias{current_frame}
+\alias{ctxt_frame}
+\alias{call_frame}
+\alias{ctxt_depth}
+\alias{call_depth}
+\alias{ctxt_stack}
+\alias{call_stack}
+\title{Call stack information}
+\usage{
+global_frame()
+
+current_frame()
+
+ctxt_frame(n = 1)
+
+call_frame(n = 1, clean = TRUE)
+
+ctxt_depth()
+
+call_depth()
+
+ctxt_stack(n = NULL, trim = 0)
+
+call_stack(n = NULL, clean = TRUE)
+}
+\arguments{
+\item{n}{The number of frames to go back in the stack.}
+
+\item{clean}{Whether to post-process the call stack to clean
+non-standard frames. If \code{TRUE}, suboptimal call-stack entries created by
+\code{\link[base:eval]{base::eval()}} will be cleaned up: the duplicate frame created by
+\code{eval()} is eliminated.}
+
+\item{trim}{The number of layers of intervening frames to trim off
+the stack. See \code{\link[=stack_trim]{stack_trim()}} and examples.}
+}
+\description{
+The \code{ctxt_} and \code{call_} families of functions provide a replacement
+for the base R functions prefixed with \code{sys.} (which are all about
+the context stack), as well as for \code{\link[=parent.frame]{parent.frame()}} (which is the
+only base R function for querying the call stack). The context
+stack includes all R-level evaluation contexts. It is linear in
+terms of execution history but due to lazy evaluation it is
+potentially nonlinear in terms of call history. The call stack
+history, on the other hand, is homogeneous.
+}
+\details{
+\code{ctxt_frame()} and \code{call_frame()} return a \code{frame} object
+containing the following fields: \code{expr} and \code{env} (call expression
+and evaluation environment), \code{pos} and \code{caller_pos} (position of
+current frame in the context stack and position of the caller), and
+\code{fun} (function of the current frame). \code{ctxt_stack()} and
+\code{call_stack()} return a list of all context or call frames on the
+stack. Finally, \code{ctxt_depth()} and \code{call_depth()} report the
+current context position or the number of calling frames on the
+stack.
+
+The base R functions take two sorts of arguments to indicate which
+frame to query: \code{which} and \code{n}. The \code{n} argument is
+straightforward: it's the number of frames to go down the stack,
+with \code{n = 1} referring to the current context. The \code{which} argument
+is more complicated and changes meaning for values lower than 1.
+For the sake of consistency, the rlang functions all take the
+same kind of argument \code{n}. This argument has a single meaning (the
+number of frames to go down the stack) and cannot be lower than 1.
+
+Note finally that \code{parent.frame(1)} corresponds to
+\code{call_frame(2)$env}, as \code{n = 1} always refers to the current
+frame. This makes the \code{_frame()} and \code{_stack()} functions
+consistent: \code{ctxt_frame(2)} is the same as \code{ctxt_stack()[[2]]}.
+Also, \code{ctxt_depth()} returns one more frame than
+\code{base::sys.nframe()} because it counts the global frame. That is
+consistent with the \code{_stack()} functions which return the global
+frame as well. This way, \code{call_stack(call_depth())} is the same as
+\code{global_frame()}.
+}
+\examples{
+# Expressions within arguments count as contexts
+identity(identity(ctxt_depth())) # returns 2
+
+# But they are not part of the call stack because arguments are
+# evaluated within the calling function (or the global environment
+# if called at top level)
+identity(identity(call_depth())) # returns 0
+
+# The context stacks includes all intervening execution frames. The
+# call stack doesn't:
+f <- function(x) identity(x)
+f(f(ctxt_stack()))
+f(f(call_stack()))
+
+g <- function(cmd) cmd()
+f(g(ctxt_stack))
+f(g(call_stack))
+
+# The rlang _stack() functions return a list of frame
+# objects. Use purrr::transpose() or purrr::map() to extract a
+# particular field from a stack:
+
+# stack <- f(f(call_stack()))
+# purrr::map(stack, "env")
+# purrr::transpose(stack)$expr
+
+# current_frame() is an alias for ctxt_frame(1)
+fn <- function() list(current = current_frame(), first = ctxt_frame(1))
+fn()
+
+# While current_frame() is the top of the stack, global_frame() is
+# the bottom:
+fn <- function() {
+  n <- ctxt_depth()
+  ctxt_frame(n)
+}
+identical(fn(), global_frame())
+
+
+# ctxt_stack() returns a stack with all intervening frames. You can
+# trim layers of intervening frames with the trim argument:
+identity(identity(ctxt_stack()))
+identity(identity(ctxt_stack(trim = 1)))
+
+# ctxt_stack() is called within fn() with intervening frames:
+fn <- function(trim) identity(identity(ctxt_stack(trim = trim)))
+fn(0)
+
+# We can trim the first layer of those:
+fn(1)
+
+# The outside intervening frames (at the fn() call site) are still
+# returned, but can be trimmed as well:
+identity(identity(fn(1)))
+identity(identity(fn(2)))
+
+g <- function(trim) identity(identity(fn(trim)))
+g(2)
+g(3)
+}
diff --git a/man/stack_trim.Rd b/man/stack_trim.Rd
new file mode 100644
index 0000000..1678d0d
--- /dev/null
+++ b/man/stack_trim.Rd
@@ -0,0 +1,51 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/stack.R
+\name{stack_trim}
+\alias{stack_trim}
+\title{Trim top call layers from the evaluation stack}
+\usage{
+stack_trim(stack, n = 1)
+}
+\arguments{
+\item{stack}{An evaluation stack.}
+
+\item{n}{The number of call frames (not eval frames) to trim off
+the top of the stack. In other words, the number of layers of
+intervening frames to trim.}
+}
+\description{
+\code{\link[=ctxt_stack]{ctxt_stack()}} can be tricky to use in real code because all
+intervening frames are returned with the stack, including those at
+\code{ctxt_stack()}'s own call site. \code{stack_trim()} makes it easy to
+remove layers of intervening calls.
+}
+\examples{
+# Intervening frames appear on the evaluation stack:
+identity(identity(ctxt_stack()))
+
+# stack_trim() will trim the first n layers of calls:
+stack_trim(identity(identity(ctxt_stack())))
+
+# Note that it also takes care of calls intervening at its own call
+# site:
+identity(identity(
+  stack_trim(identity(identity(ctxt_stack())))
+))
+
+# It is especially useful when used within a function that needs to
+# inspect the evaluation stack but should nonetheless be callable
+# within nested calls without side effects:
+stack_util <- function() {
+  # n = 2 means that two layers of intervening calls should be
+  # removed: The layer at ctxt_stack()'s call site (including the
+  # stack_trim() call), and the layer at stack_util()'s call.
+  stack <- stack_trim(ctxt_stack(), n = 2)
+  stack
+}
+user_fn <- function() {
+  # A user calls your stack utility with intervening frames:
+  identity(identity(stack_util()))
+}
+# These intervening frames won't appear in the evaluation stack
+identity(user_fn())
+}
diff --git a/man/string.Rd b/man/string.Rd
new file mode 100644
index 0000000..9103cc7
--- /dev/null
+++ b/man/string.Rd
@@ -0,0 +1,44 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-chr.R
+\name{string}
+\alias{string}
+\title{Create a string}
+\usage{
+string(x, encoding = NULL)
+}
+\arguments{
+\item{x}{A character vector or a vector or list of string-like
+objects.}
+
+\item{encoding}{If non-null, passed to \code{\link[=set_chr_encoding]{set_chr_encoding()}} to add
+an encoding mark. This is only declarative, no encoding
+conversion is performed.}
+}
+\description{
+These base-type constructors allow more control over the creation
+of strings in R. They take character vectors or string-like objects
+(integerish or raw vectors), and optionally set the encoding. The
+string version checks that the input contains a scalar string.
+}
+\examples{
+# As everywhere in R, you can specify a string with Unicode
+# escapes. The characters corresponding to Unicode codepoints will
+# be encoded in UTF-8, and the string will be marked as UTF-8
+# automatically:
+cafe <- string("caf\\uE9")
+str_encoding(cafe)
+as_bytes(cafe)
+
+# In addition, string() provides useful conversions to let
+# programmers control how the string is represented in memory. For
+# encodings other than UTF-8, you'll need to supply the bytes in
+# hexadecimal form. If it is a latin1 encoding, you can mark the
+# string explicitly:
+cafe_latin1 <- string(c(0x63, 0x61, 0x66, 0xE9), "latin1")
+str_encoding(cafe_latin1)
+as_bytes(cafe_latin1)
+}
+\seealso{
+\code{set_chr_encoding()} for more information
+about encodings in R.
+}
diff --git a/man/switch_lang.Rd b/man/switch_lang.Rd
new file mode 100644
index 0000000..58bd968
--- /dev/null
+++ b/man/switch_lang.Rd
@@ -0,0 +1,84 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{switch_lang}
+\alias{switch_lang}
+\alias{coerce_lang}
+\alias{lang_type_of}
+\title{Dispatch on call type}
+\usage{
+switch_lang(.x, ...)
+
+coerce_lang(.x, .to, ...)
+
+lang_type_of(x)
+}
+\arguments{
+\item{.x, x}{A language object (a call). If a formula quote, the RHS
+is extracted first.}
+
+\item{...}{Named clauses. The names should be types as returned by
+\code{lang_type_of()}.}
+
+\item{.to}{This is useful when you switchpatch within a coercing
+function. If supplied, this should be a string indicating the
+target type. A catch-all clause is then added to signal an error
+stating the conversion failure. This type is prettified unless
+\code{.to} inherits from the S3 class \code{"AsIs"} (see \code{\link[base:I]{base::I()}}).}
+}
+\description{
+\code{switch_lang()} dispatches clauses based on the subtype of call, as
+determined by \code{lang_type_of()}. The subtypes are based on the type
+of call head (see details).
+}
+\details{
+Calls (objects of type \code{language}) do not necessarily call a named
+function. They can also call an anonymous function or the result of
+some other expression. The language subtypes are organised around
+the kind of object being called:
+\itemize{
+\item For regular calls to a named function, \code{switch_lang()} returns
+"named".
+\item Sometimes the function being called is the result of another
+function call, e.g. \code{foo()()}, or the result of another
+subsetting call, e.g. \code{foo$bar()} or \code{foo@bar()}. In this case,
+the call head is not a symbol, it is another call (e.g. to the
+infix functions \code{$} or \code{@}). The call subtype is said to be
+"recursive".
+\item A special subset of recursive calls are namespaced calls like
+\code{foo::bar()}. \code{switch_lang()} returns "namespaced" for these
+calls. It is generally a good idea if your function treats
+\code{bar()} and \code{foo::bar()} similarly.
+\item Finally, it is possible to have a literal (see \code{\link[=is_expr]{is_expr()}} for a
+definition of literals) as call head. In most cases, this will be
+a function inlined in the call (this is sometimes an expedient
+way of dealing with scoping issues). For calls with a literal
+node head, \code{switch_lang()} returns "inlined". Note that if a call
+head contains a literal that is not a function, something went
+wrong and using that object will probably make R crash.
+\code{switch_lang()} issues an error in this case.
+}
+
+The reason we use the term \emph{node head} is because calls are
+structured as tree objects. This makes sense because the best
+representation for language code is a parse tree, with the tree
+hierarchy determined by the order of operations. See \link{pairlist} for
+more on this.
+}
+\examples{
+# Named calls:
+lang_type_of(~foo())
+
+# Recursive calls:
+lang_type_of(~foo$bar())
+lang_type_of(~foo()())
+
+# Namespaced calls:
+lang_type_of(~base::list())
+
+# For an inlined call, let's inline a function in the head node:
+call <- quote(foo(letters))
+call[[1]] <- base::toupper
+
+call
+lang_type_of(call)
+}
diff --git a/man/switch_type.Rd b/man/switch_type.Rd
new file mode 100644
index 0000000..e3d88c7
--- /dev/null
+++ b/man/switch_type.Rd
@@ -0,0 +1,88 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{switch_type}
+\alias{switch_type}
+\alias{coerce_type}
+\alias{switch_class}
+\alias{coerce_class}
+\title{Dispatch on base types}
+\usage{
+switch_type(.x, ...)
+
+coerce_type(.x, .to, ...)
+
+switch_class(.x, ...)
+
+coerce_class(.x, .to, ...)
+}
+\arguments{
+\item{.x}{An object from which to dispatch.}
+
+\item{...}{Named clauses. The names should be types as returned by
+\code{\link[=type_of]{type_of()}}.}
+
+\item{.to}{This is useful when you switchpatch within a coercing
+function. If supplied, this should be a string indicating the
+target type. A catch-all clause is then added to signal an error
+stating the conversion failure. This type is prettified unless
+\code{.to} inherits from the S3 class \code{"AsIs"} (see \code{\link[base:I]{base::I()}}).}
+}
+\description{
+\code{switch_type()} is equivalent to
+\code{\link[base]{switch}(\link{type_of}(x), ...)}, while
+\code{switch_class()} switchpatches based on \code{class(x)}. The \code{coerce_}
+versions are intended for type conversion and provide a standard
+error message when conversion fails.
+}
+\examples{
+switch_type(3L,
+  double = "foo",
+  integer = "bar",
+  "default"
+)
+
+# Use the coerce_ version to get standardised error handling when no
+# type matches:
+to_chr <- function(x) {
+  coerce_type(x, "a chr",
+    integer = as.character(x),
+    double = as.character(x)
+  )
+}
+to_chr(3L)
+
+# Strings have their own type:
+switch_type("str",
+  character = "foo",
+  string = "bar",
+  "default"
+)
+
+# Use a fallthrough clause if you need to dispatch on all character
+# vectors, including strings:
+switch_type("str",
+  string = ,
+  character = "foo",
+  "default"
+)
+
+# special and builtin functions are treated as primitive, since
+# there is usually no reason to treat them differently:
+switch_type(base::list,
+  primitive = "foo",
+  "default"
+)
+switch_type(base::`$`,
+  primitive = "foo",
+  "default"
+)
+
+# closures are not primitives:
+switch_type(rlang::switch_type,
+  primitive = "foo",
+  "default"
+)
+}
+\seealso{
+\code{\link[=switch_lang]{switch_lang()}}
+}
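The dispatch described above can be sketched in base R. This is a hypothetical illustration of the type-normalisation step, not rlang's actual implementation: compute a modified `typeof()` and then `switch()` on the result.

```r
# Hypothetical sketch of the normalised type that switch_type()
# dispatches on: scalar character is "string", special/builtin
# functions are "primitive".
type_of_sketch <- function(x) {
  type <- typeof(x)
  if (type == "character" && length(x) == 1) {
    "string"
  } else if (type %in% c("special", "builtin")) {
    "primitive"
  } else {
    type
  }
}

type_of_sketch("foo")    # "string"
type_of_sketch(letters)  # "character"
type_of_sketch(`$`)      # "primitive"
```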
diff --git a/man/sym.Rd b/man/sym.Rd
new file mode 100644
index 0000000..4af91bb
--- /dev/null
+++ b/man/sym.Rd
@@ -0,0 +1,23 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/expr-sym.R
+\name{sym}
+\alias{sym}
+\alias{syms}
+\title{Create a symbol or list of symbols}
+\usage{
+sym(x)
+
+syms(x)
+}
+\arguments{
+\item{x}{A string or list of strings.}
+}
+\value{
+A symbol for \code{sym()} and a list of symbols for \code{syms()}.
+}
+\description{
+These functions take strings as input and turn them into symbols.
+Unlike \code{as.name()}, they convert the strings to the native
+encoding beforehand. This is necessary because symbols silently
+remove the encoding mark of strings (see \code{\link[=set_str_encoding]{set_str_encoding()}}).
+}
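The conversion step can be sketched in base R (a hypothetical illustration with an invented name, not rlang's actual implementation): translate the string to the native encoding first, then create the symbol.

```r
# Hypothetical sketch of sym()'s conversion step: reencode to the
# native encoding before creating the symbol.
sym_sketch <- function(x) {
  stopifnot(is.character(x), length(x) == 1)
  as.name(enc2native(x))
}

sym_sketch("foo")  # the symbol `foo`
```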
diff --git a/man/tidyeval-data.Rd b/man/tidyeval-data.Rd
new file mode 100644
index 0000000..37bc00b
--- /dev/null
+++ b/man/tidyeval-data.Rd
@@ -0,0 +1,24 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/eval-tidy.R
+\docType{data}
+\name{tidyeval-data}
+\alias{tidyeval-data}
+\alias{.data}
+\title{Data pronoun for tidy evaluation}
+\format{An object of class \code{dictionary} of length 0.}
+\usage{
+.data
+}
+\description{
+This pronoun is installed by functions performing \link[=eval_tidy]{tidy
+evaluation}. It allows you to refer to overscoped data
+explicitly.
+}
+\details{
+You can import this object in your package namespace to avoid \code{R CMD check} errors when referring to overscoped objects.
+}
+\examples{
+quo <- quo(.data$foo)
+eval_tidy(quo, list(foo = "bar"))
+}
+\keyword{datasets}
diff --git a/man/type-predicates.Rd b/man/type-predicates.Rd
new file mode 100644
index 0000000..d4184c7
--- /dev/null
+++ b/man/type-predicates.Rd
@@ -0,0 +1,66 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{type-predicates}
+\alias{type-predicates}
+\alias{is_list}
+\alias{is_atomic}
+\alias{is_vector}
+\alias{is_integer}
+\alias{is_double}
+\alias{is_character}
+\alias{is_logical}
+\alias{is_raw}
+\alias{is_bytes}
+\alias{is_null}
+\title{Type predicates}
+\usage{
+is_list(x, n = NULL)
+
+is_atomic(x, n = NULL)
+
+is_vector(x, n = NULL)
+
+is_integer(x, n = NULL)
+
+is_double(x, n = NULL)
+
+is_character(x, n = NULL, encoding = NULL)
+
+is_logical(x, n = NULL)
+
+is_raw(x, n = NULL)
+
+is_bytes(x, n = NULL)
+
+is_null(x)
+}
+\arguments{
+\item{x}{Object to be tested.}
+
+\item{n}{Expected length of a vector.}
+
+\item{encoding}{Expected encoding of a string or character
+vector. One of \code{UTF-8}, \code{latin1}, or \code{unknown}.}
+}
+\description{
+These type predicates aim to make type testing in R more
+consistent. They are wrappers around \code{\link[base:typeof]{base::typeof()}}, so they
+operate at a level beneath S3/S4 class dispatch.
+}
+\details{
+Compared to base R functions:
+\itemize{
+\item The predicates for vectors include the \code{n} argument for
+pattern-matching on the vector length.
+\item Unlike \code{is.atomic()}, \code{is_atomic()} does not return \code{TRUE} for
+\code{NULL}.
+\item Unlike \code{is.vector()}, \code{is_vector()} tests whether an object
+is an atomic vector or a list. \code{is.vector()} instead checks for
+the presence of attributes (other than names).
+\item \code{is_function()} returns \code{TRUE} only for regular functions, not
+special or primitive functions.
+}
+}
+\seealso{
+\link{bare-type-predicates} \link{scalar-type-predicates}
+}
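A length-aware predicate in the style documented above can be sketched directly on `typeof()`. This is a hypothetical, simplified version with an invented name, not rlang's implementation:

```r
# Hypothetical sketch of a length-aware type predicate: check the
# base type, then optionally pattern-match on the length.
is_integer_sketch <- function(x, n = NULL) {
  typeof(x) == "integer" && (is.null(n) || length(x) == n)
}

is_integer_sketch(1:3)     # TRUE
is_integer_sketch(1:3, 2)  # FALSE: wrong length
is_integer_sketch(1.5)     # FALSE: doubles are not integers
```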
diff --git a/man/type_of.Rd b/man/type_of.Rd
new file mode 100644
index 0000000..9a39a49
--- /dev/null
+++ b/man/type_of.Rd
@@ -0,0 +1,48 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/types.R
+\name{type_of}
+\alias{type_of}
+\title{Base type of an object}
+\usage{
+type_of(x)
+}
+\arguments{
+\item{x}{An R object.}
+}
+\description{
+This is equivalent to \code{\link[base:typeof]{base::typeof()}} with a few differences that
+make dispatching easier:
+\itemize{
+\item The type of one-sided formulas is "quote".
+\item The type of character vectors of length 1 is "string".
+\item The type of special and builtin functions is "primitive".
+}
+}
+\examples{
+type_of(10L)
+
+# Quosures are treated as a new base type but not formulas:
+type_of(quo(10L))
+type_of(~10L)
+
+# Compare to base::typeof():
+typeof(quo(10L))
+
+# Strings are treated as a new base type:
+type_of(letters)
+type_of(letters[[1]])
+
+# This is a bit inconsistent with the core language tenet that data
+# types are vectors. However, treating strings as a different
+# scalar type is quite helpful for switching on function inputs
+# since so many arguments expect strings:
+switch_type("foo", character = abort("vector!"), string = "result")
+
+# Special and builtin primitives are both treated as primitives.
+# That's because it is often irrelevant which type of primitive an
+# input is:
+typeof(list)
+typeof(`$`)
+type_of(list)
+type_of(`$`)
+}
diff --git a/man/vector-along.Rd b/man/vector-along.Rd
new file mode 100644
index 0000000..e1fa945
--- /dev/null
+++ b/man/vector-along.Rd
@@ -0,0 +1,53 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-ctor.R
+\name{vector-along}
+\alias{vector-along}
+\alias{lgl_along}
+\alias{int_along}
+\alias{dbl_along}
+\alias{chr_along}
+\alias{cpl_along}
+\alias{raw_along}
+\alias{bytes_along}
+\alias{list_along}
+\alias{rep_along}
+\title{Create vectors matching the length of a given vector}
+\usage{
+lgl_along(.x)
+
+int_along(.x)
+
+dbl_along(.x)
+
+chr_along(.x)
+
+cpl_along(.x)
+
+raw_along(.x)
+
+bytes_along(.x)
+
+list_along(.x)
+
+rep_along(.x, .y)
+}
+\arguments{
+\item{.x}{A vector.}
+
+\item{.y}{Values to repeat.}
+}
+\description{
+These functions take the idea of \code{\link[=seq_along]{seq_along()}} and generalise it to
+creating lists (\code{list_along}) and repeating values (\code{rep_along}).
+Except for \code{list_along()} and \code{raw_along()}, the empty vectors are
+filled with typed \code{missing} values.
+}
+\examples{
+x <- 0:5
+rep_along(x, 1:2)
+rep_along(x, 1)
+list_along(x)
+}
+\seealso{
+vector-len
+}
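The recycling behaviour of `rep_along()` can be sketched with base R's `rep_len()`. A hypothetical one-liner for illustration, not rlang's actual implementation:

```r
# Hypothetical sketch of rep_along(): recycle `.y` to the length
# of `.x` using rep_len().
rep_along_sketch <- function(.x, .y) rep_len(.y, length(.x))

rep_along_sketch(0:5, 1:2)  # 1 2 1 2 1 2
rep_along_sketch(0:5, 1)    # 1 1 1 1 1 1
```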
diff --git a/man/vector-coercion.Rd b/man/vector-coercion.Rd
new file mode 100644
index 0000000..aa2dbbc
--- /dev/null
+++ b/man/vector-coercion.Rd
@@ -0,0 +1,137 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-coercion.R
+\name{vector-coercion}
+\alias{vector-coercion}
+\alias{as_logical}
+\alias{as_integer}
+\alias{as_double}
+\alias{as_complex}
+\alias{as_character}
+\alias{as_string}
+\alias{as_list}
+\title{Coerce an object to a base type}
+\usage{
+as_logical(x)
+
+as_integer(x)
+
+as_double(x)
+
+as_complex(x)
+
+as_character(x, encoding = NULL)
+
+as_string(x, encoding = NULL)
+
+as_list(x)
+}
+\arguments{
+\item{x}{An object to coerce to a base type.}
+
+\item{encoding}{If non-null, passed to \code{\link[=set_chr_encoding]{set_chr_encoding()}} to add
+an encoding mark. This is only declarative, no encoding
+conversion is performed.}
+}
+\description{
+These are equivalent to the base functions (e.g. \code{\link[=as.logical]{as.logical()}},
+\code{\link[=as.list]{as.list()}}, etc), but perform coercion rather than conversion.
+This means they are not generic and will not call S3 conversion
+methods. They only attempt to coerce the base type of their
+input. In addition, they have stricter implicit coercion rules and
+will never attempt any kind of parsing. E.g. they will not try to
+figure out if a character vector represents integers or booleans.
+Finally, they treat attributes consistently, unlike the base R
+functions: all attributes except names are removed.
+}
+\section{Coercion to logical and numeric atomic vectors}{
+
+\itemize{
+\item To logical vectors: Integer and integerish double vectors. See
+\code{\link[=is_integerish]{is_integerish()}}.
+\item To integer vectors: Logical and integerish double vectors.
+\item To double vectors: Logical and integer vectors.
+\item To complex vectors: Logical, integer and double vectors.
+}
+}
+
+\section{Coercion to character vectors}{
+
+
+\code{as_character()} and \code{as_string()} have an optional \code{encoding}
+argument to specify the encoding. R uses this information for
+internal handling of strings and character vectors. Note that this
+is only declarative, no encoding conversion is attempted. See
+\code{\link[=as_utf8_character]{as_utf8_character()}} and \code{\link[=as_native_character]{as_native_character()}} for coercing to a
+character vector while also attempting encoding conversion.
+
+See also \code{\link[=set_chr_encoding]{set_chr_encoding()}} and \code{\link[=mut_utf8_locale]{mut_utf8_locale()}} for
+information about encodings and locales in R, and \code{\link[=string]{string()}} and
+\code{\link[=chr]{chr()}} for other ways of creating strings and character vectors.
+
+Note that only \code{as_string()} can coerce symbols to a scalar
+character vector. This makes the code more explicit and adds an
+extra type check.
+}
+
+\section{Coercion to lists}{
+
+
+\code{as_list()} only coerces vector and dictionary types (environments
+are an example of dictionary type). Unlike \code{\link[base:as.list]{base::as.list()}},
+\code{as_list()} removes all attributes except names.
+}
+
+\section{Effects of removing attributes}{
+
+
+A technical side-effect of removing the attributes of the input is
+that the underlying object has to be copied. This has no
+performance implications in the case of lists because this is a
+shallow copy: only the list structure is copied, not the contents
+(see \code{\link[=duplicate]{duplicate()}}). However, be aware that atomic vectors
+containing large amounts of data will have to be copied.
+
+In general, any attribute modification creates a copy, which is why
+it is better to avoid using attributes with heavy atomic vectors.
+Uncopyable objects like environments and symbols are an exception
+to this rule: in this case, attribute modification happens in
+place and has side-effects.
+}
+
+\examples{
+# Coercing atomic vectors removes attributes with both base R and rlang:
+x <- structure(TRUE, class = "foo", bar = "baz")
+as.logical(x)
+
+# Coercing lists, however, preserves attributes in base R but not in rlang:
+l <- structure(list(TRUE), class = "foo", bar = "baz")
+as.list(l)
+as_list(l)
+
+# Implicit conversions are performed in base R but not rlang:
+as.logical(l)
+\dontrun{
+as_logical(l)
+}
+
+# Conversion methods are bypassed, making the result of the
+# coercion more predictable:
+as.list.foo <- function(x) "wrong"
+as.list(l)
+as_list(l)
+
+# The input is never parsed. E.g. character vectors of numbers are
+# not converted to numeric types:
+as.integer("33")
+\dontrun{
+as_integer("33")
+}
+
+
+# With base R tools there is no way to convert an environment to a
+# list without either triggering method dispatch, or changing the
+# original environment. as_list() makes it easy:
+x <- structure(as_env(mtcars[1:2]), class = "foobar")
+as.list.foobar <- function(x) abort("dont call me")
+as_list(x)
+}
diff --git a/man/vector-construction.Rd b/man/vector-construction.Rd
new file mode 100644
index 0000000..4ba7217
--- /dev/null
+++ b/man/vector-construction.Rd
@@ -0,0 +1,109 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-ctor.R
+\name{vector-construction}
+\alias{vector-construction}
+\alias{lgl}
+\alias{int}
+\alias{dbl}
+\alias{cpl}
+\alias{chr}
+\alias{bytes}
+\alias{ll}
+\title{Create vectors}
+\usage{
+lgl(...)
+
+int(...)
+
+dbl(...)
+
+cpl(...)
+
+chr(..., .encoding = NULL)
+
+bytes(...)
+
+ll(...)
+}
+\arguments{
+\item{...}{Components of the new vector. Bare lists and explicitly
+spliced lists are spliced.}
+
+\item{.encoding}{If non-null, passed to \code{\link[=set_chr_encoding]{set_chr_encoding()}} to add
+an encoding mark. This is only declarative, no encoding
+conversion is performed.}
+}
+\description{
+The atomic vector constructors are equivalent to \code{\link[=c]{c()}} but allow
+you to be more explicit about the output type. Implicit coercions
+(e.g. from integer to logical) follow the rules described in
+\link{vector-coercion}. In addition, all constructors support splicing:
+if you supply \link[=is_bare_list]{bare} lists or \link[=is_spliced]{explicitly
+spliced} lists, their contents are spliced into the
+output vectors (see below for details). \code{ll()} is a list
+constructor similar to \code{\link[base:list]{base::list()}} but with splicing semantics.
+}
+\section{Splicing}{
+
+
+Splicing is an operation similar to flattening one level of nested
+lists, e.g. with \code{\link[=unlist]{base::unlist(x, recursive =
+FALSE)}} or \code{purrr::flatten()}. \code{ll()} returns its arguments as a
+list, just like \code{list()} would, but inner lists qualifying for
+splicing are flattened. That is, their contents are embedded in the
+surrounding list. Similarly, \code{chr()} concatenates its arguments and
+returns them as a single character vector, but inner lists are
+flattened before concatenation.
+
+Whether an inner list qualifies for splicing is determined by the
+type of splicing semantics. All the atomic constructors like
+\code{chr()} have \emph{list splicing} semantics: \link[=is_bare_list]{bare} lists
+and \link[=is_spliced]{explicitly spliced} lists are spliced.
+
+There are two list constructors with different splicing
+semantics. \code{ll()} only splices lists explicitly marked with
+\code{\link[=splice]{splice()}}.
+}
+
+\examples{
+# These constructors are like a typed version of c():
+c(TRUE, FALSE)
+lgl(TRUE, FALSE)
+
+# They follow a restricted set of coercion rules:
+int(TRUE, FALSE, 20)
+
+# Lists can be spliced:
+dbl(10, list(1, 2L), TRUE)
+
+
+# They splice names a bit differently than c(). The latter
+# automatically composes inner and outer names:
+c(a = c(A = 10), b = c(B = 20, C = 30))
+
+# On the other hand, rlang's ctors use the inner names and issue a
+# warning to inform the user that the outer names are ignored:
+dbl(a = c(A = 10), b = c(B = 20, C = 30))
+dbl(a = c(1, 2))
+
+# As an exception, it is allowed to provide an outer name when the
+# inner vector is an unnamed scalar atomic:
+dbl(a = 1)
+
+# Spliced lists behave the same way:
+dbl(list(a = 1))
+dbl(list(a = c(A = 1)))
+
+# bytes() accepts integerish inputs
+bytes(1:10)
+bytes(0x01, 0xff, c(0x03, 0x05), list(10, 20, 30L))
+
+# The list constructor has explicit splicing semantics:
+ll(1, list(2))
+
+# Note that explicitly spliced lists are always spliced:
+ll(!!! list(1, 2))
+}
+\seealso{
+\code{\link[=ll]{ll()}}
+}
diff --git a/man/vector-len.Rd b/man/vector-len.Rd
new file mode 100644
index 0000000..45f4e30
--- /dev/null
+++ b/man/vector-len.Rd
@@ -0,0 +1,47 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/vector-ctor.R
+\name{vector-len}
+\alias{vector-len}
+\alias{lgl_len}
+\alias{int_len}
+\alias{dbl_len}
+\alias{chr_len}
+\alias{cpl_len}
+\alias{raw_len}
+\alias{bytes_len}
+\alias{list_len}
+\title{Create vectors matching a given length}
+\usage{
+lgl_len(.n)
+
+int_len(.n)
+
+dbl_len(.n)
+
+chr_len(.n)
+
+cpl_len(.n)
+
+raw_len(.n)
+
+bytes_len(.n)
+
+list_len(.n)
+}
+\arguments{
+\item{.n}{The vector length.}
+}
+\description{
+These functions construct vectors of a given length. Except for
+\code{list_len()} and \code{bytes_len()}, the new vectors are filled with
+typed \link{missing} values. This is in
+contrast to the base function \code{\link[base:vector]{base::vector()}} which creates
+zero-filled vectors.
+}
+\examples{
+list_len(10)
+lgl_len(10)
+}
+\seealso{
+vector-along
+}
diff --git a/man/with_env.Rd b/man/with_env.Rd
new file mode 100644
index 0000000..4d9bddc
--- /dev/null
+++ b/man/with_env.Rd
@@ -0,0 +1,61 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/eval.R
+\name{with_env}
+\alias{with_env}
+\alias{locally}
+\title{Evaluate an expression within a given environment}
+\usage{
+with_env(env, expr)
+
+locally(expr)
+}
+\arguments{
+\item{env}{An environment within which to evaluate \code{expr}. Can be
+an object with an \code{\link[=get_env]{get_env()}} method.}
+
+\item{expr}{An expression to evaluate.}
+}
+\description{
+These functions evaluate \code{expr} within a given environment (\code{env}
+for \code{with_env()}, or a child of the current environment for
+\code{locally()}). They rely on \code{\link[=eval_bare]{eval_bare()}}, which features a lighter
+evaluation mechanism than \code{\link[base:eval]{base::eval()}} and has
+some subtle implications when evaluating stack-sensitive functions
+(see the help for \code{\link[=eval_bare]{eval_bare()}}).
+}
+\details{
+\code{locally()} is equivalent to the base function
+\code{\link[base:local]{base::local()}} but it produces a much cleaner
+evaluation stack, and has stack-consistent semantics. It is thus
+more suited for experimenting with the R language.
+}
+\examples{
+# with_env() is handy to create formulas with a given environment:
+env <- child_env("rlang")
+f <- with_env(env, ~new_formula())
+identical(f_env(f), env)
+
+# Or functions with a given enclosure:
+fn <- with_env(env, function() NULL)
+identical(get_env(fn), env)
+
+
+# Unlike eval() it doesn't create duplicates on the evaluation
+# stack. You can thus use it e.g. to create non-local returns:
+fn <- function() {
+  g(get_env())
+  "normal return"
+}
+g <- function(env) {
+  with_env(env, return("early return"))
+}
+fn()
+
+
+# Since env is passed to as_env(), it can be any object with an
+# as_env() method. For strings, the pkg_env() is returned:
+with_env("base", ~mtcars)
+
+# This can be handy to put dictionaries in scope:
+with_env(mtcars, cyl)
+}
diff --git a/man/with_handlers.Rd b/man/with_handlers.Rd
new file mode 100644
index 0000000..7dd3c3f
--- /dev/null
+++ b/man/with_handlers.Rd
@@ -0,0 +1,90 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd-handlers.R
+\name{with_handlers}
+\alias{with_handlers}
+\title{Establish handlers on the stack}
+\usage{
+with_handlers(.expr, ...)
+}
+\arguments{
+\item{.expr}{An expression to execute in a context where new
+handlers are established. The underscored version takes a quoted
+expression or a quoted formula.}
+
+\item{...}{Named handlers. Handlers should inherit from \code{exiting}
+or \code{inplace}. See \code{\link[=exiting]{exiting()}} and \code{\link[=inplace]{inplace()}} for constructing
+such handlers. Dots are evaluated with \link[=dots_list]{explicit
+splicing}.}
+}
+\description{
+Condition handlers are functions established on the evaluation
+stack (see \code{\link[=ctxt_stack]{ctxt_stack()}}) that are called by R when a condition is
+signalled (see \code{\link[=cnd_signal]{cnd_signal()}} and \code{\link[=abort]{abort()}} for two common signal
+functions). They come in two types: exiting handlers, which jump
+out of the signalling context and are transferred to
+\code{with_handlers()} before being executed, and inplace handlers,
+which are executed within the signalling functions.
+}
+\details{
+An exiting handler takes charge of the condition: no other handler
+on the stack gets a chance to handle it. The handler is executed
+and \code{with_handlers()} returns that handler's return value. On
+the other hand, inplace handlers do not necessarily take
+charge. If they return normally, they decline to
+handle the condition, and R looks for other handlers established on
+the evaluation stack. Only by jumping to an earlier call frame can
+an inplace handler take charge of the condition and stop the
+signalling process. Sometimes, a muffling restart has been
+established for the purpose of jumping out of the signalling
+function but not out of the context where the condition was
+signalled, which allows execution to resume normally. See
+\code{\link[=rst_muffle]{rst_muffle()}}, the \code{muffle} argument of \code{\link[=inplace]{inplace()}}, and the
+\code{mufflable} argument of \code{\link[=cnd_signal]{cnd_signal()}}.
+
+Exiting handlers are established first by \code{with_handlers()}, and
+inplace handlers are installed second. The latter thus take
+precedence over the former.
+}
+\examples{
+# Signal a condition with cnd_signal():
+fn <- function() {
+  g()
+  cat("called?\\n")
+  "fn() return value"
+}
+g <- function() {
+  h()
+  cat("called?\\n")
+}
+h <- function() {
+  cnd_signal("foo")
+  cat("called?\\n")
+}
+
+# Exiting handlers jump to with_handlers() before being
+# executed. Their return value is handed over:
+handler <- function(c) "handler return value"
+with_handlers(fn(), foo = exiting(handler))
+
+# In place handlers are called in turn and their return value is
+# ignored. Returning just means they are declining to take charge of
+# the condition. However, they can produce side-effects such as
+# displaying a message:
+some_handler <- function(c) cat("some handler!\\n")
+other_handler <- function(c) cat("other handler!\\n")
+with_handlers(fn(), foo = inplace(some_handler), foo = inplace(other_handler))
+
+# If an in place handler jumps to an earlier context, it takes
+# charge of the condition and no other handler gets a chance to
+# deal with it. The canonical way of transferring control is by
+# jumping to a restart. See with_restarts() and restarting()
+# documentation for more on this:
+exiting_handler <- function(c) rst_jump("rst_foo")
+fn2 <- function() {
+  with_restarts(g(), rst_foo = function() "restart value")
+}
+with_handlers(fn2(), foo = inplace(exiting_handler), foo = inplace(other_handler))
+}
+\seealso{
+\code{\link[=exiting]{exiting()}}, \code{\link[=inplace]{inplace()}}.
+}
diff --git a/man/with_restarts.Rd b/man/with_restarts.Rd
new file mode 100644
index 0000000..e0c79ab
--- /dev/null
+++ b/man/with_restarts.Rd
@@ -0,0 +1,120 @@
+% Generated by roxygen2: do not edit by hand
+% Please edit documentation in R/cnd-restarts.R
+\name{with_restarts}
+\alias{with_restarts}
+\title{Establish a restart point on the stack}
+\usage{
+with_restarts(.expr, ...)
+}
+\arguments{
+\item{.expr}{An expression to execute with new restarts established
+on the stack. This argument is passed by expression and supports
+\link[=quasiquotation]{unquoting}. It is evaluated in a context where
+restarts are established.}
+
+\item{...}{Named restart functions. The name is taken as the
+restart name and the function is executed after the jump. These
+dots are evaluated with \link[=dots_list]{explicit splicing}.}
+}
+\description{
+Restart points are named functions that are established with
+\code{with_restarts()}. Once established, you can interrupt the normal
+execution of R code, jump to the restart, and resume execution from
+there. Each restart is established along with a restart function
+that is executed after the jump and that provides a return value
+from the establishing point (i.e., a return value for
+\code{with_restarts()}).
+}
+\details{
+Restarts are not the only way of jumping to a previous call frame
+(see \code{\link[=return_from]{return_from()}} or \code{\link[=return_to]{return_to()}}). However, they have the advantage of
+being callable by name once established.
+}
+\examples{
+# Restarts are not the only way to jump to a previous frame, but
+# they have the advantage of being callable by name:
+fn <- function() with_restarts(g(), my_restart = function() "returned")
+g <- function() h()
+h <- function() { rst_jump("my_restart"); "not returned" }
+fn()
+
+# Whereas a non-local return requires manually passing the calling
+# frame to the return function:
+fn <- function() g(get_env())
+g <- function(env) h(env)
+h <- function(env) { return_from(env, "returned"); "not returned" }
+fn()
+
+
+# rst_maybe_jump() checks that a restart exists before trying to jump:
+fn <- function() {
+  g()
+  cat("will this be called?\\n")
+}
+g <- function() {
+  rst_maybe_jump("my_restart")
+  cat("will this be called?\\n")
+}
+
+# Here no restarts are on the stack:
+fn()
+
+# If a restart point called `my_restart` was established on the
+# stack before calling fn(), the control flow will jump there:
+rst <- function() {
+  cat("restarting...\\n")
+  "return value"
+}
+with_restarts(fn(), my_restart = rst)
+
+
+# Restarts are particularly useful to provide alternative default
+# values when the normal output cannot be computed:
+
+fn <- function(valid_input) {
+  if (valid_input) {
+    return("normal value")
+  }
+
+  # We decide to return the empty string "" as the default
+  # value. An alternative strategy would be to signal an error. In
+  # any case, we want to provide a way for the caller to get a
+  # different output. For this purpose, we provide two restart
+  # functions that return alternative defaults:
+  restarts <- list(
+    rst_empty_chr = function() character(0),
+    rst_null = function() NULL
+  )
+
+  with_restarts(splice(restarts), .expr = {
+
+    # Signal a typed condition to let the caller know that we are
+    # about to return an empty string as default value:
+    cnd_signal("default_empty_string")
+
+    # If no jump to with_restarts, return default value:
+    ""
+  })
+}
+
+# Normal value for valid input:
+fn(TRUE)
+
+# Default value for bad input:
+fn(FALSE)
+
+# Change the default value if you need an empty character vector by
+# defining an inplace handler that jumps to the restart. It has to
+# be inplace because exiting handlers jump to the place where they
+# are established before being executed, and the restart is not
+# defined anymore at that point:
+rst_handler <- inplace(function(c) rst_jump("rst_empty_chr"))
+with_handlers(fn(FALSE), default_empty_string = rst_handler)
+
+# You can use restarting() to create restarting handlers easily:
+with_handlers(fn(FALSE), default_empty_string = restarting("rst_null"))
+}
+\seealso{
+\code{\link[=return_from]{return_from()}} and \code{\link[=return_to]{return_to()}} for a more flexible way
+of performing a non-local jump to an arbitrary call frame.
+}
diff --git a/src/attrs.c b/src/attrs.c
new file mode 100644
index 0000000..e2f7393
--- /dev/null
+++ b/src/attrs.c
@@ -0,0 +1,18 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+
+// These change attributes in-place.
+
+SEXP rlang_zap_attrs(SEXP x) {
+  SET_ATTRIB(x, R_NilValue);
+  return x;
+}
+
+SEXP rlang_set_attrs(SEXP x, SEXP attrs) {
+  SET_ATTRIB(x, attrs);
+  return x;
+}
+
+SEXP rlang_get_attrs(SEXP x) {
+  return ATTRIB(x);
+}
diff --git a/src/capture.c b/src/capture.c
new file mode 100644
index 0000000..0ebec71
--- /dev/null
+++ b/src/capture.c
@@ -0,0 +1,111 @@
+#include <Rinternals.h>
+
+#define attribute_hidden
+#define _(string) (string)
+
+// from symbol.c
+SEXP unescape_sexp(SEXP chr);
+
+
+SEXP attribute_hidden capture_arg(SEXP x, SEXP env) {
+    static SEXP nms = NULL;
+    if (!nms) {
+        nms = allocVector(STRSXP, 2);
+        R_PreserveObject(nms);
+        SET_STRING_ELT(nms, 0, mkChar("expr"));
+        SET_STRING_ELT(nms, 1, mkChar("env"));
+    }
+
+    SEXP info = PROTECT(allocVector(VECSXP, 2));
+    SET_VECTOR_ELT(info, 0, x);
+    SET_VECTOR_ELT(info, 1, env);
+    setAttrib(info, R_NamesSymbol, nms);
+
+    UNPROTECT(1);
+    return info;
+}
+
+SEXP attribute_hidden capture_promise(SEXP x, int strict) {
+    // If promise was optimised away, return the literal
+    if (TYPEOF(x) != PROMSXP)
+        return capture_arg(x, R_EmptyEnv);
+
+    SEXP env = R_NilValue;
+    while (TYPEOF(x) == PROMSXP) {
+        env = PRENV(x);
+        x = PREXPR(x);
+    }
+    if (env == R_NilValue) {
+        if (strict)
+            error(_("the argument has already been evaluated"));
+        else
+            return R_NilValue;
+    }
+
+    if (NAMED(x) < 2)
+        SET_NAMED(x, 2);
+    return capture_arg(x, env);
+}
+
+SEXP attribute_hidden rlang_capturearg(SEXP call, SEXP op, SEXP args, SEXP rho)
+{
+    int strict = asLogical(CADR(args));
+    SEXP arg = findVarInFrame3(rho, install("x"), TRUE);
+
+    if (TYPEOF(arg) == PROMSXP) {
+        // Get promise in caller frame
+        SEXP caller_env = CAR(args);
+        SEXP sym = PREXPR(arg);
+        if (TYPEOF(sym) != SYMSXP)
+            error(_("\"x\" must be an argument name"));
+
+        arg = findVarInFrame3(caller_env, sym, TRUE);
+        return capture_promise(arg, strict);
+    } else {
+        // Argument was optimised away
+        return capture_arg(arg, R_EmptyEnv);
+    }
+}
+
+SEXP attribute_hidden rlang_capturedots(SEXP call, SEXP op, SEXP args, SEXP rho)
+{
+    SEXP caller_env = CAR(args);
+    int strict = asLogical(CADR(args));
+
+    // R code has checked for unbound dots
+    SEXP dots = findVarInFrame3(caller_env, R_DotsSymbol, TRUE);
+
+    if (dots == R_MissingArg)
+        return allocVector(VECSXP, 0);
+
+    int n_dots = length(dots);
+    SEXP captured = PROTECT(allocVector(VECSXP, n_dots));
+    SEXP names = PROTECT(allocVector(STRSXP, n_dots));
+    setAttrib(captured, R_NamesSymbol, names);
+
+    SEXP dot;
+    int i = 0;
+    while (i != n_dots) {
+        dot = CAR(dots);
+
+        if (TYPEOF(dot) == PROMSXP) {
+            dot = capture_promise(dot, strict);
+            if (dot == R_NilValue) {
+                UNPROTECT(2);
+                return R_NilValue;
+            }
+        } else {
+            dot = capture_arg(dot, R_EmptyEnv);
+        }
+        SET_VECTOR_ELT(captured, i, dot);
+
+        if (TAG(dots) != R_NilValue)
+            SET_STRING_ELT(names, i, unescape_sexp(PRINTNAME(TAG(dots))));
+
+        ++i;
+        dots = CDR(dots);
+    }
+
+    UNPROTECT(2);
+    return captured;
+}
diff --git a/src/env.c b/src/env.c
new file mode 100644
index 0000000..ce14545
--- /dev/null
+++ b/src/env.c
@@ -0,0 +1,7 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+
+SEXP r_mut_env_parent(SEXP env, SEXP new_parent) {
+  SET_ENCLOS(env, new_parent);
+  return env;
+}
diff --git a/src/eval.c b/src/eval.c
new file mode 100644
index 0000000..6979805
--- /dev/null
+++ b/src/eval.c
@@ -0,0 +1,6 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+
+SEXP rlang_eval(SEXP expr, SEXP env) {
+  return Rf_eval(expr, env);
+}
diff --git a/src/export.c b/src/export.c
new file mode 100644
index 0000000..7b8fb83
--- /dev/null
+++ b/src/export.c
@@ -0,0 +1,37 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+#include <Rversion.h>
+
+#include "export.h"
+
+#if (defined(R_VERSION) && R_VERSION < R_Version(3, 4, 0))
+SEXP R_MakeExternalPtrFn(DL_FUNC p, SEXP tag, SEXP prot) {
+  fn_ptr ptr;
+  ptr.fn = p;
+  return R_MakeExternalPtr(ptr.p, tag, prot);
+}
+DL_FUNC R_ExternalPtrAddrFn(SEXP s) {
+  fn_ptr ptr;
+  ptr.p = EXTPTR_PTR(s);
+  return ptr.fn;
+}
+#endif
+
+SEXP rlang_namespace(const char* ns) {
+  SEXP call = PROTECT(Rf_lang2(Rf_install("getNamespace"), Rf_mkString(ns)));
+  SEXP ns_env = Rf_eval(call, R_BaseEnv);
+  UNPROTECT(1);
+  return ns_env;
+}
+
+void rlang_register_pointer(const char* ns, const char* ptr_name, DL_FUNC fn) {
+  SEXP ptr = PROTECT(R_MakeExternalPtrFn(fn, R_NilValue, R_NilValue));
+
+  SEXP ptr_obj = PROTECT(Rf_allocVector(VECSXP, 1));
+  SET_VECTOR_ELT(ptr_obj, 0, ptr);
+
+  Rf_setAttrib(ptr_obj, R_ClassSymbol, Rf_mkString("fn_pointer"));
+
+  Rf_defineVar(Rf_install(ptr_name), ptr_obj, rlang_namespace(ns));
+  UNPROTECT(2);
+}
diff --git a/src/export.h b/src/export.h
new file mode 100644
index 0000000..c255f7b
--- /dev/null
+++ b/src/export.h
@@ -0,0 +1,18 @@
+#ifndef RLANG_EXPORT_H
+#define RLANG_EXPORT_H
+
+#define R_NO_REMAP
+#include <Rinternals.h>
+#include <Rversion.h>
+#include <R_ext/Rdynload.h>
+
+
+#if (defined(R_VERSION) && R_VERSION < R_Version(3, 4, 0))
+typedef union {void* p; DL_FUNC fn;} fn_ptr;
+SEXP R_MakeExternalPtrFn(DL_FUNC p, SEXP tag, SEXP prot);
+DL_FUNC R_ExternalPtrAddrFn(SEXP s);
+#endif
+
+void rlang_register_pointer(const char* ns, const char* ptr_name, DL_FUNC fn);
+
+#endif
diff --git a/src/formula.c b/src/formula.c
new file mode 100644
index 0000000..6f8900b
--- /dev/null
+++ b/src/formula.c
@@ -0,0 +1,76 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+#include <stdbool.h>
+
+
+SEXP f_rhs_(SEXP f) {
+  if (TYPEOF(f) != LANGSXP)
+    Rf_errorcall(R_NilValue, "`x` must be a formula");
+
+  switch (Rf_length(f)) {
+  case 2: return CADR(f);
+  case 3: return CADDR(f);
+  default: Rf_errorcall(R_NilValue, "Invalid formula");
+  }
+}
+SEXP f_lhs_(SEXP f) {
+  if (TYPEOF(f) != LANGSXP)
+    Rf_errorcall(R_NilValue, "`x` must be a formula");
+
+  switch (Rf_length(f)) {
+  case 2: return R_NilValue;
+  case 3: return CADR(f);
+  default: Rf_errorcall(R_NilValue, "Invalid formula");
+  }
+}
+SEXP f_env_(SEXP f) {
+  return Rf_getAttrib(f, Rf_install(".Environment"));
+}
+
+bool is_formulaish(SEXP x, int scoped, int lhs) {
+  if (TYPEOF(x) != LANGSXP)
+    return false;
+
+  SEXP head = CAR(x);
+  if (head != Rf_install("~") && head != Rf_install(":="))
+    return false;
+
+  if (scoped >= 0) {
+    int has_env = TYPEOF(f_env_(x)) == ENVSXP;
+    if (scoped != has_env)
+      return false;
+  }
+
+  if (lhs >= 0) {
+    int has_lhs = Rf_length(x) > 2;
+    if (lhs != has_lhs)
+      return false;
+  }
+
+  return true;
+}
+
+bool is_formula(SEXP x) {
+  if (!is_formulaish(x, -1, -1))
+    return false;
+
+  return CAR(x) == Rf_install("~");
+}
+
+
+// Export
+
+int lgl_optional(SEXP lgl) {
+  if (lgl == R_NilValue)
+    return -1;
+  else
+    return Rf_asLogical(lgl);
+}
+
+SEXP rlang_is_formulaish(SEXP x, SEXP scoped, SEXP lhs) {
+  int scoped_int = lgl_optional(scoped);
+  int lhs_int = lgl_optional(lhs);
+
+  bool out = is_formulaish(x, scoped_int, lhs_int);
+  return Rf_ScalarLogical(out);
+}
diff --git a/src/formula.h b/src/formula.h
new file mode 100644
index 0000000..f47f55e
--- /dev/null
+++ b/src/formula.h
@@ -0,0 +1,11 @@
+#ifndef RLANG_FORMULA_H
+#define RLANG_FORMULA_H
+
+
+SEXP rlang_is_formulaish(SEXP x, SEXP scoped, SEXP lhs);
+SEXP f_rhs_(SEXP f);
+SEXP f_lhs_(SEXP f);
+SEXP f_env_(SEXP f);
+
+
+#endif
diff --git a/src/init.c b/src/init.c
new file mode 100644
index 0000000..c1f2777
--- /dev/null
+++ b/src/init.c
@@ -0,0 +1,104 @@
+#include <Rinternals.h>
+#include <R_ext/Rdynload.h>
+#include <stdbool.h>
+
+#include "export.h"
+
+// Callable from other packages
+extern SEXP rlang_new_dictionary(SEXP, SEXP, SEXP);
+extern SEXP rlang_squash_if(SEXP, SEXPTYPE, bool (*is_spliceable)(SEXP), int);
+extern bool is_clevel_spliceable(SEXP);
+
+// Callable from this package
+extern SEXP f_lhs_(SEXP);
+extern SEXP f_rhs_(SEXP);
+extern SEXP r_mut_env_parent(SEXP, SEXP);
+extern SEXP rlang_replace_na(SEXP, SEXP);
+extern SEXP rlang_car(SEXP);
+extern SEXP rlang_cdr(SEXP);
+extern SEXP rlang_caar(SEXP);
+extern SEXP rlang_cadr(SEXP);
+extern SEXP rlang_cdar(SEXP);
+extern SEXP rlang_cddr(SEXP);
+extern SEXP rlang_set_car(SEXP, SEXP);
+extern SEXP rlang_set_cdr(SEXP, SEXP);
+extern SEXP rlang_set_caar(SEXP, SEXP);
+extern SEXP rlang_set_cadr(SEXP, SEXP);
+extern SEXP rlang_set_cdar(SEXP, SEXP);
+extern SEXP rlang_set_cddr(SEXP, SEXP);
+extern SEXP rlang_cons(SEXP, SEXP);
+extern SEXP rlang_duplicate(SEXP);
+extern SEXP rlang_shallow_duplicate(SEXP);
+extern SEXP rlang_tag(SEXP);
+extern SEXP rlang_set_tag(SEXP, SEXP);
+extern SEXP rlang_eval(SEXP, SEXP);
+extern SEXP rlang_zap_attrs(SEXP);
+extern SEXP rlang_get_attrs(SEXP);
+extern SEXP rlang_set_attrs(SEXP, SEXP);
+extern SEXP rlang_interp(SEXP, SEXP, SEXP);
+extern SEXP rlang_is_formulaish(SEXP, SEXP, SEXP);
+extern SEXP rlang_is_reference(SEXP, SEXP);
+extern SEXP rlang_sxp_address(SEXP);
+extern SEXP rlang_length(SEXP);
+extern SEXP rlang_new_dictionary(SEXP, SEXP, SEXP);
+extern SEXP rlang_squash(SEXP, SEXP, SEXP, SEXP);
+extern SEXP rlang_symbol(SEXP);
+extern SEXP rlang_symbol_to_character(SEXP);
+extern SEXP rlang_unescape_character(SEXP);
+extern SEXP capture_arg(SEXP, SEXP);
+extern SEXP rlang_capturearg(SEXP, SEXP, SEXP, SEXP);
+extern SEXP rlang_capturedots(SEXP, SEXP, SEXP, SEXP);
+extern SEXP rlang_new_language(SEXP, SEXP);
+
+static const R_CallMethodDef call_entries[] = {
+  {"f_lhs_",                    (DL_FUNC) &f_lhs_, 1},
+  {"f_rhs_",                    (DL_FUNC) &f_rhs_, 1},
+  {"rlang_replace_na",          (DL_FUNC) &rlang_replace_na, 2},
+  {"rlang_caar",                (DL_FUNC) &rlang_caar, 1},
+  {"rlang_cadr",                (DL_FUNC) &rlang_cadr, 1},
+  {"rlang_capturearg",          (DL_FUNC) &rlang_capturearg, 4},
+  {"rlang_capturedots",         (DL_FUNC) &rlang_capturedots, 4},
+  {"rlang_car",                 (DL_FUNC) &rlang_car, 1},
+  {"rlang_cdar",                (DL_FUNC) &rlang_cdar, 1},
+  {"rlang_cddr",                (DL_FUNC) &rlang_cddr, 1},
+  {"rlang_cdr",                 (DL_FUNC) &rlang_cdr, 1},
+  {"rlang_cons",                (DL_FUNC) &rlang_cons, 2},
+  {"rlang_duplicate",           (DL_FUNC) &rlang_duplicate, 1},
+  {"rlang_eval",                (DL_FUNC) &rlang_eval, 2},
+  {"rlang_get_attrs",           (DL_FUNC) &rlang_get_attrs, 1},
+  {"rlang_interp",              (DL_FUNC) &rlang_interp, 3},
+  {"rlang_is_formulaish",       (DL_FUNC) &rlang_is_formulaish, 3},
+  {"rlang_is_reference",        (DL_FUNC) &rlang_is_reference, 2},
+  {"rlang_length",              (DL_FUNC) &rlang_length, 1},
+  {"rlang_new_dictionary",      (DL_FUNC) &rlang_new_dictionary, 3},
+  {"rlang_set_attrs",           (DL_FUNC) &rlang_set_attrs, 2},
+  {"rlang_set_caar",            (DL_FUNC) &rlang_set_caar, 2},
+  {"rlang_set_cadr",            (DL_FUNC) &rlang_set_cadr, 2},
+  {"rlang_set_car",             (DL_FUNC) &rlang_set_car, 2},
+  {"rlang_set_cdar",            (DL_FUNC) &rlang_set_cdar, 2},
+  {"rlang_set_cddr",            (DL_FUNC) &rlang_set_cddr, 2},
+  {"rlang_set_cdr",             (DL_FUNC) &rlang_set_cdr, 2},
+  {"rlang_mut_env_parent",      (DL_FUNC) &r_mut_env_parent, 2},
+  {"rlang_set_tag",             (DL_FUNC) &rlang_set_tag, 2},
+  {"rlang_shallow_duplicate",   (DL_FUNC) &rlang_shallow_duplicate, 1},
+  {"rlang_squash",              (DL_FUNC) &rlang_squash, 4},
+  {"rlang_sxp_address",         (DL_FUNC) &rlang_sxp_address, 1},
+  {"rlang_symbol",              (DL_FUNC) &rlang_symbol, 1},
+  {"rlang_symbol_to_character", (DL_FUNC) &rlang_symbol_to_character, 1},
+  {"rlang_tag",                 (DL_FUNC) &rlang_tag, 1},
+  {"rlang_unescape_character",  (DL_FUNC) &rlang_unescape_character, 1},
+  {"rlang_zap_attrs",           (DL_FUNC) &rlang_zap_attrs, 1},
+  {"rlang_new_language",        (DL_FUNC) &rlang_new_language, 2},
+  {NULL, NULL, 0}
+};
+
+void R_init_rlang(DllInfo* dll) {
+  // Register functions callable from other packages
+  R_RegisterCCallable("rlang", "rlang_new_dictionary", (DL_FUNC) &rlang_new_dictionary);
+  R_RegisterCCallable("rlang", "rlang_squash_if", (DL_FUNC) &rlang_squash_if);
+  rlang_register_pointer("rlang", "test_is_spliceable", (DL_FUNC) &is_clevel_spliceable);
+
+  // Register functions callable from this package
+  R_registerRoutines(dll, NULL, call_entries, NULL, NULL);
+  R_useDynamicSymbols(dll, FALSE);
+}
diff --git a/src/lang.c b/src/lang.c
new file mode 100644
index 0000000..35c0c21
--- /dev/null
+++ b/src/lang.c
@@ -0,0 +1,6 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+
+SEXP rlang_new_language(SEXP head, SEXP tail) {
+  return Rf_lcons(head, tail);
+}
diff --git a/src/pairlist.c b/src/pairlist.c
new file mode 100644
index 0000000..df3bc33
--- /dev/null
+++ b/src/pairlist.c
@@ -0,0 +1,64 @@
+#include <Rinternals.h>
+
+SEXP rlang_car(SEXP x) {
+  return CAR(x);
+}
+SEXP rlang_cdr(SEXP x) {
+  return CDR(x);
+}
+SEXP rlang_caar(SEXP x) {
+  return CAAR(x);
+}
+SEXP rlang_cadr(SEXP x) {
+  return CADR(x);
+}
+SEXP rlang_cdar(SEXP x) {
+  return CDAR(x);
+}
+SEXP rlang_cddr(SEXP x) {
+  return CDDR(x);
+}
+
+SEXP rlang_set_car(SEXP x, SEXP newcar) {
+  SETCAR(x, newcar);
+  return x;
+}
+SEXP rlang_set_cdr(SEXP x, SEXP newcdr) {
+  SETCDR(x, newcdr);
+  return x;
+}
+SEXP rlang_set_caar(SEXP x, SEXP newcaar) {
+  SETCAR(CAR(x), newcaar);
+  return x;
+}
+SEXP rlang_set_cadr(SEXP x, SEXP newcar) {
+  SETCADR(x, newcar);
+  return x;
+}
+SEXP rlang_set_cdar(SEXP x, SEXP newcdar) {
+  SETCDR(CAR(x), newcdar);
+  return x;
+}
+SEXP rlang_set_cddr(SEXP x, SEXP newcdr) {
+  SETCDR(CDR(x), newcdr);
+  return x;
+}
+
+SEXP rlang_cons(SEXP car, SEXP cdr) {
+  return CONS(car, cdr);
+}
+
+SEXP rlang_duplicate(SEXP x) {
+  return Rf_duplicate(x);
+}
+SEXP rlang_shallow_duplicate(SEXP x) {
+  return Rf_shallow_duplicate(x);
+}
+
+SEXP rlang_tag(SEXP x) {
+  return TAG(x);
+}
+SEXP rlang_set_tag(SEXP x, SEXP tag) {
+  SET_TAG(x, tag);
+  return x;
+}
diff --git a/src/replace-na.c b/src/replace-na.c
new file mode 100644
index 0000000..76c17ae
--- /dev/null
+++ b/src/replace-na.c
@@ -0,0 +1,129 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+
+SEXP replace_na_(SEXP x, SEXP replacement, int start);
+
+SEXP rlang_replace_na(SEXP x, SEXP replacement) {
+  int n = Rf_length(x);
+  int i = 0;
+
+  switch(TYPEOF(x)) {
+  case LGLSXP: {
+    int* arr = LOGICAL(x);
+    for (; i < n; ++i) {
+      if (arr[i] == NA_LOGICAL)
+        break;
+    }
+    break;
+  }
+
+  case INTSXP: {
+    int* arr = INTEGER(x);
+    for (; i < n; ++i) {
+      if (arr[i] == NA_INTEGER)
+        break;
+    }
+    break;
+  }
+
+  case REALSXP: {
+    double* arr = REAL(x);
+    for (; i < n; ++i) {
+      if (ISNA(arr[i]))
+        break;
+    }
+    break;
+  }
+
+  case STRSXP: {
+    for (; i < n; ++i) {
+      if (STRING_ELT(x, i) == NA_STRING)
+        break;
+    }
+    break;
+  }
+
+  case CPLXSXP: {
+    Rcomplex* arr = COMPLEX(x);
+
+    for (; i < n; ++i) {
+      if (ISNA(arr[i].r))
+        break;
+    }
+    break;
+  }
+
+  default: {
+    Rf_errorcall(R_NilValue, "Don't know how to handle object of type `%s`", Rf_type2char(TYPEOF(x)));
+  }
+  }
+
+  if (i < n)
+    return replace_na_(x, replacement, i);
+  else
+    return x;
+}
+
+SEXP replace_na_(SEXP x, SEXP replacement, int i) {
+  PROTECT(x = Rf_duplicate(x));
+  int n = Rf_length(x);
+
+  switch(TYPEOF(x)) {
+  case LGLSXP: {
+    int* arr = LOGICAL(x);
+    int new_value = LOGICAL(replacement)[0];
+    for (; i < n; ++i) {
+      if (arr[i] == NA_LOGICAL)
+        arr[i] = new_value;
+    }
+    break;
+  }
+
+  case INTSXP: {
+    int* arr = INTEGER(x);
+    int new_value = INTEGER(replacement)[0];
+    for (; i < n; ++i) {
+      if (arr[i] == NA_INTEGER)
+        arr[i] = new_value;
+    }
+    break;
+  }
+
+  case REALSXP: {
+    double* arr = REAL(x);
+    double new_value = REAL(replacement)[0];
+    for (; i < n; ++i) {
+      if (ISNA(arr[i]))
+        arr[i] = new_value;
+    }
+    break;
+  }
+
+  case STRSXP: {
+    SEXP new_value = STRING_ELT(replacement, 0);
+    for (; i < n; ++i) {
+      if (STRING_ELT(x, i) == NA_STRING)
+        SET_STRING_ELT(x, i, new_value);
+    }
+    break;
+  }
+
+  case CPLXSXP: {
+    Rcomplex* arr = COMPLEX(x);
+    Rcomplex new_value = COMPLEX(replacement)[0];
+
+    for (; i < n; ++i) {
+      if (ISNA(arr[i].r))
+        arr[i] = new_value;
+    }
+    break;
+  }
+
+  default: {
+    Rf_errorcall(R_NilValue, "Don't know how to handle object of type `%s`", Rf_type2char(TYPEOF(x)));
+  }
+  }
+
+  UNPROTECT(1);
+  return x;
+}
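The replace-na code above follows a scan-then-copy discipline: a first pass locates the first NA, and the vector is duplicated only when one is found, so NA-free inputs are returned untouched. A standalone sketch of that pattern, using INT_MIN as a stand-in sentinel for R's `NA_INTEGER` (the helper name `replace_na_int` is illustrative, not part of rlang):

```c
#include <stdlib.h>
#include <string.h>

#define NA_INT (-2147483647 - 1)  /* sentinel, mirrors R's NA_INTEGER */

/* Replace NA_INT entries by `replacement`; copy only if an NA exists. */
int* replace_na_int(int* x, int n, int replacement, int* copied) {
  int i = 0;
  while (i < n && x[i] != NA_INT)  /* first pass: locate first NA */
    ++i;
  if (i == n) {                    /* NA-free: return input unmodified */
    *copied = 0;
    return x;
  }
  int* out = malloc(n * sizeof *out);
  memcpy(out, x, n * sizeof *out); /* duplicate, then patch from i on */
  for (; i < n; ++i)
    if (out[i] == NA_INT)
      out[i] = replacement;
  *copied = 1;
  return out;
}
```

The second loop can resume at `i` rather than 0 because everything before the first NA is already correct in the copy — the same reason `rlang_replace_na()` passes its scan index to `replace_na_`.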
diff --git a/src/sexp.c b/src/sexp.c
new file mode 100644
index 0000000..c95593a
--- /dev/null
+++ b/src/sexp.c
@@ -0,0 +1,12 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+
+SEXP rlang_sxp_address(SEXP x) {
+  static char str[1000];
+  snprintf(str, 1000, "%p", (void*) x);
+  return Rf_mkString(str);
+}
+
+SEXP rlang_is_reference(SEXP x, SEXP y) {
+  return Rf_ScalarLogical(x == y);
+}
diff --git a/src/splice.c b/src/splice.c
new file mode 100644
index 0000000..1856927
--- /dev/null
+++ b/src/splice.c
@@ -0,0 +1,288 @@
+#include "utils.h"
+#include "vector.h"
+#include "export.h"
+
+
+typedef struct {
+  R_len_t size;
+  bool named;
+  bool warned;
+  bool recursive;
+} squash_info_t;
+
+squash_info_t squash_info_init(bool recursive) {
+  squash_info_t info;
+  info.size = 0;
+  info.named = false;
+  info.warned = false;
+  info.recursive = recursive;
+  return info;
+}
+
+
+// Atomic squashing ---------------------------------------------------
+
+static
+R_len_t atom_squash(SEXPTYPE kind, squash_info_t info,
+                    SEXP outer, SEXP out, R_len_t count,
+                    bool (*is_spliceable)(SEXP), int depth) {
+  if (TYPEOF(outer) != VECSXP)
+    Rf_errorcall(R_NilValue, "Only lists can be spliced");
+
+  SEXP inner;
+  SEXP out_names = names(out);
+  R_len_t n_outer = Rf_length(outer);
+  R_len_t n_inner;
+
+  for (R_len_t i = 0; i != n_outer; ++i) {
+    inner = VECTOR_ELT(outer, i);
+    n_inner = vec_length(inner);
+
+    if (depth != 0 && is_spliceable(inner)) {
+      count = atom_squash(kind, info, inner, out, count, is_spliceable, depth - 1);
+    } else if (n_inner) {
+      vec_copy_coerce_n(inner, n_inner, out, count, 0);
+
+      if (info.named) {
+        SEXP nms = names(inner);
+        if (is_character(nms))
+          vec_copy_n(nms, n_inner, out_names, count, 0);
+        else if (n_inner == 1 && has_name_at(outer, i))
+          SET_STRING_ELT(out_names, count, STRING_ELT(names(outer), i));
+      }
+
+      count += n_inner;
+    }
+  }
+
+  return count;
+}
+
+
+// List squashing -----------------------------------------------------
+
+R_len_t list_squash(squash_info_t info, SEXP outer,
+                    SEXP out, R_len_t count,
+                    bool (*is_spliceable)(SEXP), int depth) {
+  if (TYPEOF(outer) != VECSXP)
+    Rf_errorcall(R_NilValue, "Only lists can be spliced");
+
+  SEXP inner;
+  SEXP out_names = names(out);
+  R_len_t n_outer = Rf_length(outer);
+
+  for (R_len_t i = 0; i != n_outer; ++i) {
+    inner = VECTOR_ELT(outer, i);
+
+    if (depth != 0 && is_spliceable(inner)) {
+      count = list_squash(info, inner, out, count, is_spliceable, depth - 1);
+    } else {
+      SET_VECTOR_ELT(out, count, inner);
+
+      if (info.named && is_character(names(outer))) {
+        SEXP name = STRING_ELT(names(outer), i);
+        SET_STRING_ELT(out_names, count, name);
+      }
+
+      count += 1;
+    }
+  }
+
+  return count;
+}
+
+
+// First pass --------------------------------------------------------
+
+static
+void squash_warn_names(void) {
+  Rf_warningcall(R_NilValue, "Outer names are only allowed for unnamed scalar atomic inputs");
+}
+
+static
+void update_info_outer(squash_info_t* info, SEXP outer, R_len_t i) {
+  if (!info->warned && info->recursive && has_name_at(outer, i)) {
+    squash_warn_names();
+    info->warned = true;
+  }
+}
+static
+void update_info_inner(squash_info_t* info, SEXP outer, R_len_t i, SEXP inner) {
+  R_len_t n_inner = info->recursive ? 1 : vec_length(inner);
+  info->size += n_inner;
+
+  // Return early if possible
+  if (info->named && info->warned)
+    return;
+
+  bool named = is_character(names(inner));
+  bool recursive = info->recursive;
+
+  bool copy_outer = recursive || n_inner == 1;
+  bool copy_inner = !recursive;
+
+  if (named && copy_inner)
+    info->named = true;
+
+  if (has_name_at(outer, i)) {
+    if (!recursive && (n_inner != 1 || named) && !info->warned) {
+      squash_warn_names();
+      info->warned = true;
+    }
+    if (copy_outer)
+      info->named = true;
+  }
+}
+
+static
+void squash_info(squash_info_t* info, SEXP outer,
+                 bool (*is_spliceable)(SEXP), int depth) {
+  SEXP inner;
+  R_len_t n_inner;
+  R_len_t n_outer = Rf_length(outer);
+
+  for (R_len_t i = 0; i != n_outer; ++i) {
+    inner = VECTOR_ELT(outer, i);
+    n_inner = info->recursive ? 1 : vec_length(inner);
+
+    if (depth != 0 && is_spliceable(inner)) {
+      update_info_outer(info, outer, i);
+      squash_info(info, inner, is_spliceable, depth - 1);
+    } else if (n_inner) {
+      update_info_inner(info, outer, i, inner);
+    }
+  }
+}
+
+static
+SEXP squash(SEXPTYPE kind, SEXP dots, bool (*is_spliceable)(SEXP), int depth) {
+  bool recursive = kind == VECSXP;
+
+  squash_info_t info = squash_info_init(recursive);
+  squash_info(&info, dots, is_spliceable, depth);
+
+  SEXP out = PROTECT(Rf_allocVector(kind, info.size));
+  if (info.named)
+    set_names(out, Rf_allocVector(STRSXP, info.size));
+
+  if (recursive)
+    list_squash(info, dots, out, 0, is_spliceable, depth);
+  else
+    atom_squash(kind, info, dots, out, 0, is_spliceable, depth);
+
+  UNPROTECT(1);
+  return out;
+}
+
+
+// Predicates --------------------------------------------------------
+
+typedef bool (*is_spliceable_t)(SEXP);
+
+bool is_spliced_bare(SEXP x) {
+  return is_list(x) && (!OBJECT(x) || Rf_inherits(x, "spliced"));
+}
+bool is_spliced(SEXP x) {
+  return is_list(x) && Rf_inherits(x, "spliced");
+}
+
+static
+is_spliceable_t predicate_pointer(SEXP x) {
+  switch (TYPEOF(x)) {
+  case VECSXP:
+    if (Rf_inherits(x, "fn_pointer") && Rf_length(x) == 1) {
+      SEXP ptr = VECTOR_ELT(x, 0);
+      if (TYPEOF(ptr) == EXTPTRSXP)
+        return (is_spliceable_t) R_ExternalPtrAddrFn(ptr);
+    }
+    break;
+
+  case EXTPTRSXP:
+    return (is_spliceable_t) R_ExternalPtrAddrFn(x);
+
+  default:
+    break;
+  }
+
+  Rf_errorcall(R_NilValue, "`predicate` must be a closure or function pointer");
+  return NULL;
+}
+
+is_spliceable_t predicate_internal(SEXP x) {
+  static SEXP is_spliced_clo = NULL;
+  if (!is_spliced_clo)
+    is_spliced_clo = rlang_fun(Rf_install("is_spliced"));
+
+  static SEXP is_spliceable_clo = NULL;
+  if (!is_spliceable_clo)
+    is_spliceable_clo = rlang_fun(Rf_install("is_spliced_bare"));
+
+  if (x == is_spliced_clo)
+    return &is_spliced;
+  if (x == is_spliceable_clo)
+    return &is_spliced_bare;
+  return NULL;
+}
+
+// For unit tests:
+bool is_clevel_spliceable(SEXP x) {
+  return Rf_inherits(x, "foo");
+}
+
+// Emulate closure behaviour with global variable.
+SEXP clo_spliceable = NULL;
+bool is_spliceable_closure(SEXP x) {
+  if (!clo_spliceable)
+    Rf_error("Internal error while splicing");
+  SETCADR(clo_spliceable, x);
+
+  SEXP out = Rf_eval(clo_spliceable, R_GlobalEnv);
+  return as_bool(out);
+}
+
+
+// Export ------------------------------------------------------------
+
+SEXP rlang_squash_if(SEXP dots, SEXPTYPE kind, bool (*is_spliceable)(SEXP), int depth) {
+  switch (kind) {
+  case LGLSXP:
+  case INTSXP:
+  case REALSXP:
+  case CPLXSXP:
+  case STRSXP:
+  case RAWSXP:
+  case VECSXP:
+    return squash(kind, dots, is_spliceable, depth);
+  default:
+    Rf_errorcall(R_NilValue, "Splicing is not implemented for this type");
+    return R_NilValue;
+  }
+}
+SEXP rlang_squash_closure(SEXP dots, SEXPTYPE kind, SEXP pred, int depth) {
+  SEXP prev_pred = clo_spliceable;
+  clo_spliceable = PROTECT(Rf_lang2(pred, Rf_list2(R_NilValue, R_NilValue)));
+
+  SEXP out = rlang_squash_if(dots, kind, &is_spliceable_closure, depth);
+
+  clo_spliceable = prev_pred;
+  UNPROTECT(1);
+
+  return out;
+}
+SEXP rlang_squash(SEXP dots, SEXP type, SEXP pred, SEXP depth_) {
+  SEXPTYPE kind = Rf_str2type(CHAR(STRING_ELT(type, 0)));
+  int depth = Rf_asInteger(depth_);
+
+  is_spliceable_t is_spliceable;
+
+  if (TYPEOF(pred) == CLOSXP) {
+    is_spliceable = predicate_internal(pred);
+    if (is_spliceable)
+      return rlang_squash_if(dots, kind, is_spliceable, depth);
+    else
+      return rlang_squash_closure(dots, kind, pred, depth);
+  }
+
+  is_spliceable = predicate_pointer(pred);
+  return rlang_squash_if(dots, kind, is_spliceable, depth);
+}
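The splicing code above is a classic two-pass scheme: `squash_info()` walks the nested structure once to compute the output size (and name requirements), then `atom_squash()`/`list_squash()` fill a single preallocated result while threading a running `count`. A minimal standalone sketch of that scheme, with plain int leaves standing in for SEXPs and the depth limit simplified (struct and function names here are illustrative):

```c
#include <stdlib.h>

typedef struct node {
  int is_list;             /* 1: spliceable container, 0: scalar leaf */
  int value;               /* leaf payload, unused for containers */
  struct node** children;  /* container children */
  int n_children;
} node;

/* First pass: count reachable leaves, honouring a recursion depth. */
static int count_leaves(const node* x, int depth) {
  if (!x->is_list || depth == 0)
    return 1;
  int total = 0;
  for (int i = 0; i < x->n_children; ++i)
    total += count_leaves(x->children[i], depth - 1);
  return total;
}

/* Second pass: copy leaves into the preallocated buffer at `count`. */
static int fill_leaves(const node* x, int* out, int count, int depth) {
  if (!x->is_list || depth == 0) {
    out[count] = x->value;
    return count + 1;
  }
  for (int i = 0; i < x->n_children; ++i)
    count = fill_leaves(x->children[i], out, count, depth - 1);
  return count;
}

int* squash(const node* x, int depth, int* n_out) {
  *n_out = count_leaves(x, depth);
  int* out = malloc(*n_out * sizeof *out);
  fill_leaves(x, out, 0, depth);
  return out;
}
```

Sizing first avoids any reallocation during the copy pass, which is why the real code tolerates an extra traversal of the input.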
diff --git a/src/symbol.c b/src/symbol.c
new file mode 100644
index 0000000..391884a
--- /dev/null
+++ b/src/symbol.c
@@ -0,0 +1,168 @@
+#define R_NO_REMAP
+#include <R.h>
+#include <Rinternals.h>
+#include <R_ext/GraphicsEngine.h>
+#include <stdbool.h>
+
+#define attribute_hidden
+
+// Interface functions ---------------------------------------------------------
+
+void copy_character(SEXP tgt, SEXP src, R_xlen_t len);
+R_xlen_t unescape_character_in_copy(SEXP tgt, SEXP src, R_xlen_t i);
+SEXP unescape_sexp(SEXP name);
+
+SEXP rlang_symbol(SEXP chr) {
+  SEXP string = STRING_ELT(chr, 0);
+  return Rf_install(Rf_translateChar(string));
+}
+
+SEXP rlang_symbol_to_character(SEXP chr) {
+  SEXP name = PRINTNAME(chr);
+  return Rf_ScalarString(unescape_sexp(name));
+}
+
+SEXP rlang_unescape_character(SEXP chr) {
+  R_xlen_t len = Rf_xlength(chr);
+  R_xlen_t i = unescape_character_in_copy(R_NilValue, chr, 0);
+  if (i == len) return chr;
+
+  SEXP ret = PROTECT(Rf_allocVector(STRSXP, len));
+  copy_character(ret, chr, i);
+  unescape_character_in_copy(ret, chr, i);
+  UNPROTECT(1);
+  return ret;
+}
+
+// Private functions -----------------------------------------------------------
+
+SEXP unescape_char_to_sexp(char* tmp);
+bool has_unicode_escape(const char* chr);
+int unescape_char(char* chr);
+int unescape_char_found(char* chr);
+int process_byte(char* tgt, char* const src, int* len_processed);
+bool has_codepoint(const char* src);
+bool is_hex(const char chr);
+
+void copy_character(SEXP tgt, SEXP src, R_xlen_t len) {
+  for (R_xlen_t i = 0; i < len; ++i) {
+    SET_STRING_ELT(tgt, i, STRING_ELT(src, i));
+  }
+}
+
+R_xlen_t attribute_hidden unescape_character_in_copy(SEXP tgt, SEXP src, R_xlen_t i) {
+  R_xlen_t len = Rf_length(src);
+  int dry_run = Rf_isNull(tgt);
+
+  for (; i < len; ++i) {
+    SEXP old_elt = STRING_ELT(src, i);
+    SEXP new_elt = unescape_sexp(old_elt);
+    if (dry_run) {
+      if (old_elt != new_elt) return i;
+    } else {
+      SET_STRING_ELT(tgt, i, new_elt);
+    }
+  }
+
+  return i;
+}
+
+SEXP attribute_hidden unescape_sexp(SEXP name) {
+  int ce = Rf_getCharCE(name);
+  const char* src = CHAR(name);
+  const char* re_enc = Rf_reEnc(src, ce, CE_UTF8, 0);
+
+  if (re_enc != src) {
+    // If the string has been copied, it's safe to use as buffer
+    char* tmp = (char*)re_enc;
+    return unescape_char_to_sexp(tmp);
+  } else {
+    // If not, we're in a UTF-8 locale
+    // Need to check first if the string has any UTF-8 escapes
+    if (!has_unicode_escape(src)) return name;
+    int orig_len = strlen(re_enc);
+    char tmp[orig_len + 1];
+    memcpy(tmp, re_enc, orig_len + 1);
+    return unescape_char_to_sexp(tmp);
+  }
+}
+
+SEXP attribute_hidden unescape_char_to_sexp(char* tmp) {
+  int len = unescape_char(tmp);
+  return Rf_mkCharLenCE(tmp, len, CE_UTF8);
+}
+
+bool attribute_hidden has_unicode_escape(const char* chr) {
+  while (*chr) {
+    if (has_codepoint(chr)) {
+      return true;
+    }
+    ++chr;
+  }
+
+  return false;
+}
+
+int attribute_hidden unescape_char(char* chr) {
+  int len = 0;
+
+  while (*chr) {
+    if (has_codepoint(chr)) {
+      return len + unescape_char_found(chr);
+    } else {
+      ++chr;
+      ++len;
+    }
+  }
+
+  return len;
+}
+
+int attribute_hidden unescape_char_found(char* chr) {
+  char* source = chr;
+  char* target = chr;
+  int len = 0;
+
+  while (*source) {
+    int len_processed;
+    int len_new = process_byte(target, source, &len_processed);
+    source += len_processed;
+    target += len_new;
+    len += len_new;
+  }
+
+  *target = 0;
+  return len;
+}
+
+int attribute_hidden process_byte(char* tgt, char* const src, int* len_processed) {
+  if (!has_codepoint(src)) {
+    // Copy only the first character (angle bracket or not), advance
+    *tgt = *src;
+    *len_processed = 1;
+    return 1;
+  }
+
+  unsigned int codepoint = strtoul(src + strlen("<U+"), NULL, 16);
+  *len_processed = strlen("<U+xxxx>");
+
+  // We have 8 bytes space, codepoints occupy less than that:
+  return (int)Rf_ucstoutf8(tgt, codepoint);
+}
+
+bool attribute_hidden has_codepoint(const char* src) {
+  if (src[0] != '<') return false;
+  if (src[1] != 'U') return false;
+  if (src[2] != '+') return false;
+  for (int i = 3; i < 7; ++i) {
+    if (!is_hex(src[i])) return false;
+  }
+  if (src[7] != '>') return false;
+  return true;
+}
+
+bool attribute_hidden is_hex(const char chr) {
+  if (chr >= '0' && chr <= '9') return true;
+  if (chr >= 'A' && chr <= 'F') return true;
+  return false;
+}
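The `<U+xxxx>` detection above is self-contained string logic and can be exercised outside of R. A standalone copy of the two predicates (note that `is_hex` deliberately accepts only uppercase hex, matching the escapes R emits):

```c
#include <stdbool.h>

/* True for '0'-'9' and 'A'-'F'; the escapes use uppercase hex only. */
static bool is_hex(const char chr) {
  if (chr >= '0' && chr <= '9') return true;
  if (chr >= 'A' && chr <= 'F') return true;
  return false;
}

/* True if src starts with a literal "<U+xxxx>" escape sequence. */
bool has_codepoint(const char* src) {
  if (src[0] != '<' || src[1] != 'U' || src[2] != '+') return false;
  for (int i = 3; i < 7; ++i)
    if (!is_hex(src[i])) return false;
  return src[7] == '>';
}
```

Because `is_hex('\0')` is false, the fixed-offset reads short-circuit before running past the terminator on strings shorter than the full escape.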
diff --git a/src/unquote.c b/src/unquote.c
new file mode 100644
index 0000000..623b620
--- /dev/null
+++ b/src/unquote.c
@@ -0,0 +1,180 @@
+#include <R.h>
+#include <Rdefines.h>
+#include <Rinternals.h>
+#include "utils.h"
+
+SEXP interp_walk(SEXP x, SEXP env, bool quosured);
+SEXP interp_arguments(SEXP x, SEXP env, bool quosured);
+
+
+int bang_level(SEXP x) {
+  if (!is_lang(x, "!"))
+    return 0;
+
+  SEXP arg = CDR(x);
+  if (TYPEOF(arg) == NILSXP || !is_lang(CAR(arg), "!"))
+    return 1;
+
+  arg = CDR(CAR(arg));
+  if (TYPEOF(arg) == NILSXP || !is_lang(CAR(arg), "!"))
+    return 2;
+
+  return 3;
+}
+
+int is_uq_sym(SEXP x) {
+  if (TYPEOF(x) != SYMSXP)
+    return 0;
+  else
+    return is_sym(x, "UQ") || is_sym(x, "UQE") || is_sym(x, "!!");
+}
+int is_splice_sym(SEXP x) {
+  if (TYPEOF(x) != SYMSXP)
+    return 0;
+  else
+    return is_sym(x, "UQS") || is_sym(x, "!!!");
+}
+
+SEXP replace_double_bang(SEXP x) {
+  int bang = bang_level(x);
+  if (bang == 3 || is_any_call(x, is_splice_sym))
+    Rf_errorcall(R_NilValue, "Can't splice at top-level");
+  else if (bang == 2) {
+    x = CADR(x);
+    SETCAR(x, Rf_install("UQ"));
+  }
+  return x;
+}
+SEXP replace_triple_bang(SEXP nxt, SEXP cur) {
+  if (bang_level(CAR(nxt)) == 3) {
+    nxt = CDAR(nxt);
+    nxt = CDAR(nxt);
+    SETCAR(CAR(nxt), Rf_install("UQS"));
+    SETCDR(nxt, CDR(CDR(cur)));
+  }
+  return nxt;
+}
+
+void unquote_check(SEXP x) {
+  if (CDR(x) == R_NilValue)
+    Rf_errorcall(R_NilValue, "`UQ()` must be called with an argument");
+}
+
+bool is_bare_formula(SEXP x) {
+  return TYPEOF(x) == LANGSXP
+      && CAR(x) == Rf_install("~")
+      && !Rf_inherits(x, "quosure");
+}
+SEXP unquote(SEXP x, SEXP env, SEXP uq_sym, bool quosured) {
+  if (is_sym(uq_sym, "!!"))
+    uq_sym = Rf_install("UQE");
+
+  // Inline unquote function before evaluation because even `::` might
+  // not be available in interpolation environment.
+  SEXP uq_fun = rlang_fun(uq_sym);
+
+  PROTECT_INDEX ipx;
+  PROTECT_WITH_INDEX(uq_fun, &ipx);
+  REPROTECT(uq_fun = Rf_lang2(uq_fun, x), ipx);
+
+  SEXP unquoted;
+  REPROTECT(unquoted = Rf_eval(uq_fun, env), ipx);
+
+  if (!quosured && is_symbolic(unquoted))
+    unquoted = lang2(Rf_install("quote"), unquoted);
+
+  UNPROTECT(1);
+  return unquoted;
+}
+SEXP unquote_prefixed_uq(SEXP x, SEXP env, bool quosured) {
+  SEXP uq_sym = CADR(CDAR(x));
+  SEXP unquoted = PROTECT(unquote(CADR(x), env, uq_sym, quosured));
+  SETCDR(CDAR(x), CONS(unquoted, R_NilValue));
+  UNPROTECT(1);
+
+  if (is_rlang_prefixed(x, NULL))
+    x = CADR(CDAR(x));
+  else
+    x = CAR(x);
+  return x;
+}
+SEXP splice_nxt(SEXP cur, SEXP nxt, SEXP env) {
+  static SEXP uqs_fun;
+  if (!uqs_fun)
+    uqs_fun = rlang_fun(Rf_install("UQS"));
+  SETCAR(CAR(nxt), uqs_fun);
+
+  // UQS() does error checking and returns a pair list
+  SEXP args_lsp = PROTECT(Rf_eval(CAR(nxt), env));
+
+  if (args_lsp == R_NilValue) {
+    SETCDR(cur, CDR(nxt));
+  } else {
+    // Insert args_lsp into existing pairlist of args
+    SEXP last_arg = last_cons(args_lsp);
+    SETCDR(last_arg, CDR(nxt));
+    SETCDR(cur, args_lsp);
+  }
+
+  UNPROTECT(1);
+  return cur;
+}
+SEXP splice_value_nxt(SEXP cur, SEXP nxt, SEXP env) {
+  SETCAR(CAR(nxt), rlang_fun(Rf_install("splice")));
+  SETCAR(nxt, Rf_eval(CAR(nxt), env));
+  return cur;
+}
+
+SEXP interp_walk(SEXP x, SEXP env, bool quosured)  {
+  if (!Rf_isLanguage(x))
+    return x;
+
+  PROTECT(x);
+  x = replace_double_bang(x);
+
+  if (is_prefixed_call(x, is_uq_sym)) {
+    unquote_check(x);
+    x = unquote_prefixed_uq(x, env, quosured);
+  } else if (is_any_call(x, is_uq_sym)) {
+    unquote_check(x);
+    SEXP uq_sym = CAR(x);
+    x = unquote(CADR(x), env, uq_sym, quosured);
+  } else {
+    x = interp_arguments(x, env, quosured);
+  }
+
+  UNPROTECT(1);
+  return x;
+}
+
+SEXP interp_arguments(SEXP x, SEXP env, bool quosured) {
+  for(SEXP cur = x; cur != R_NilValue; cur = CDR(cur)) {
+    SETCAR(cur, interp_walk(CAR(cur), env, quosured));
+
+    SEXP nxt = CDR(cur);
+    nxt = replace_triple_bang(nxt, cur);
+    if (is_rlang_call(CAR(nxt), is_splice_sym)) {
+      if (quosured) {
+        cur = splice_nxt(cur, nxt, env);
+        cur = nxt; // Don't interpolate unquoted stuff
+      } else {
+        cur = splice_value_nxt(cur, nxt, env);
+      }
+    }
+  }
+
+  return x;
+}
+
+SEXP rlang_interp(SEXP x, SEXP env, SEXP quosured) {
+  if (!Rf_isLanguage(x))
+    return x;
+  if (!Rf_isEnvironment(env))
+    Rf_errorcall(R_NilValue, "`env` must be an environment");
+
+  x = PROTECT(Rf_duplicate(x));
+  x = interp_walk(x, env, as_bool(quosured));
+
+  UNPROTECT(1);
+  return x;
+}
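`bang_level()` above classifies `!x`, `!!x`, and `!!!x` by peeling nested calls to `!`, capping at three. The same walk can be sketched over a simplified call tree; the `expr` struct below is an illustrative stand-in for R's LANGSXP pairlists, not rlang's actual representation:

```c
#include <string.h>
#include <stddef.h>

/* Minimal stand-in for a call: head symbol plus its only argument. */
typedef struct expr {
  const char* head;   /* function name; NULL for a plain symbol */
  struct expr* arg;   /* first argument; NULL if none */
} expr;

/* Count how many `!` calls wrap x: 0 (none) through 3 (`!!!`). */
int bang_level(const expr* x) {
  int level = 0;
  while (level < 3 && x && x->head && strcmp(x->head, "!") == 0) {
    ++level;
    x = x->arg;
  }
  return level;
}
```

Capping at three mirrors the real function: once the splice level is reached there is no syntactic meaning to deeper nesting, and the caller (`replace_double_bang`) treats level 3 as an error at top level.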
diff --git a/src/utils.c b/src/utils.c
new file mode 100644
index 0000000..ad16ba5
--- /dev/null
+++ b/src/utils.c
@@ -0,0 +1,218 @@
+#define R_NO_REMAP
+#include <R.h>
+#include <Rinternals.h>
+#include <stdbool.h>
+
+bool is_character(SEXP x) {
+  return TYPEOF(x) == STRSXP;
+}
+bool is_str_empty(SEXP str) {
+  const char* c_str = CHAR(str);
+  return strcmp(c_str, "") == 0;
+}
+
+SEXP names(SEXP x) {
+  return Rf_getAttrib(x, R_NamesSymbol);
+}
+bool has_name_at(SEXP x, R_len_t i) {
+  SEXP nms = names(x);
+  return is_character(nms) && !is_str_empty(STRING_ELT(nms, i));
+}
+SEXP set_names(SEXP x, SEXP nms) {
+  return Rf_setAttrib(x, R_NamesSymbol, nms);
+}
+
+bool is_object(SEXP x) {
+  return OBJECT(x) != 0;
+}
+bool is_atomic(SEXP x) {
+  switch(TYPEOF(x)) {
+  case LGLSXP:
+  case INTSXP:
+  case REALSXP:
+  case CPLXSXP:
+  case STRSXP:
+  case RAWSXP:
+    return true;
+  default:
+    return false;
+  }
+}
+bool is_scalar_atomic(SEXP x) {
+  return Rf_length(x) == 1 && is_atomic(x);
+}
+bool is_list(SEXP x) {
+  return TYPEOF(x) == VECSXP;
+}
+bool is_vector(SEXP x) {
+  switch(TYPEOF(x)) {
+  case LGLSXP:
+  case INTSXP:
+  case REALSXP:
+  case CPLXSXP:
+  case STRSXP:
+  case RAWSXP:
+  case VECSXP:
+    return true;
+  default:
+    return false;
+  }
+}
+bool is_null(SEXP x) {
+  return x == R_NilValue;
+}
+
+int is_sym(SEXP x, const char* string) {
+  if (TYPEOF(x) != SYMSXP)
+    return false;
+  else
+    return strcmp(CHAR(PRINTNAME(x)), string) == 0;
+}
+
+int is_symbolic(SEXP x) {
+  return TYPEOF(x) == LANGSXP || TYPEOF(x) == SYMSXP;
+}
+
+bool is_lang(SEXP x, const char* f) {
+  if (!is_symbolic(x) && TYPEOF(x) != LISTSXP)
+    return false;
+
+  SEXP fun = CAR(x);
+  return is_sym(fun, f);
+}
+
+int is_prefixed_call(SEXP x, int (*sym_predicate)(SEXP)) {
+  if (TYPEOF(x) != LANGSXP)
+    return 0;
+
+  SEXP head = CAR(x);
+  if (!(is_lang(head, "$") ||
+        is_lang(head, "@") ||
+        is_lang(head, "::") ||
+        is_lang(head, ":::")))
+    return 0;
+
+  if (sym_predicate == NULL)
+    return 1;
+
+  SEXP args = CDAR(x);
+  SEXP sym = CADR(args);
+  return sym_predicate(sym);
+}
+
+int is_any_call(SEXP x, int (*sym_predicate)(SEXP)) {
+  if (TYPEOF(x) != LANGSXP)
+    return false;
+  else
+    return sym_predicate(CAR(x)) || is_prefixed_call(x, sym_predicate);
+}
+
+int is_rlang_prefixed(SEXP x, int (*sym_predicate)(SEXP)) {
+  if (TYPEOF(x) != LANGSXP)
+    return 0;
+
+  if (!is_lang(CAR(x), "::"))
+    return 0;
+
+  SEXP args = CDAR(x);
+  SEXP ns_sym = CAR(args);
+  if (!is_sym(ns_sym, "rlang"))
+    return 0;
+
+  if (sym_predicate) {
+    SEXP sym = CADR(args);
+    return sym_predicate(sym);
+  }
+
+  return 1;
+}
+int is_rlang_call(SEXP x, int (*sym_predicate)(SEXP)) {
+  if (TYPEOF(x) != LANGSXP)
+    return false;
+  else
+    return sym_predicate(CAR(x)) || is_rlang_prefixed(x, sym_predicate);
+}
+
+SEXP last_cons(SEXP x) {
+  while(CDR(x) != R_NilValue)
+    x = CDR(x);
+  return x;
+}
+
+SEXP rlang_length(SEXP x) {
+  return Rf_ScalarInteger(Rf_length(x));
+}
+
+int is_true(SEXP x) {
+  if (TYPEOF(x) != LGLSXP || Rf_length(x) != 1)
+    Rf_errorcall(R_NilValue, "`x` must be a boolean");
+
+  int value = LOGICAL(x)[0];
+  return value == NA_LOGICAL ? 0 : value;
+}
+
+// Formulas --------------------------------------------------------------------
+
+SEXP make_formula1(SEXP rhs, SEXP env) {
+  SEXP f = PROTECT(Rf_lang2(Rf_install("~"), rhs));
+  Rf_setAttrib(f, R_ClassSymbol, Rf_mkString("formula"));
+  Rf_setAttrib(f, Rf_install(".Environment"), env);
+
+  UNPROTECT(1);
+  return f;
+}
+
+SEXP rlang_fun(SEXP sym) {
+  SEXP prefixed_sym = PROTECT(Rf_lang3(Rf_install("::"), Rf_install("rlang"), sym));
+  SEXP fun = Rf_eval(prefixed_sym, R_BaseEnv);
+  UNPROTECT(1);
+  return fun;
+}
+
+const char* kind_c_str(SEXPTYPE kind) {
+  SEXP str = Rf_type2str(kind);
+  return CHAR(str);
+}
+
+bool is_empty(SEXP x) {
+  return Rf_length(x) == 0;
+}
+
+bool as_bool(SEXP x) {
+  if (TYPEOF(x) != LGLSXP || Rf_length(x) != 1)
+    Rf_errorcall(R_NilValue, "Expected a scalar logical");
+  int* xp = (int*) LOGICAL(x);
+  return *xp;
+}
+
+SEXP rlang_new_dictionary(SEXP x, SEXP lookup_msg, SEXP read_only) {
+  SEXP dict = PROTECT(Rf_allocVector(VECSXP, 3));
+
+  SET_VECTOR_ELT(dict, 0, x);
+  SET_VECTOR_ELT(dict, 2, read_only);
+
+  if (lookup_msg == R_NilValue)
+    SET_VECTOR_ELT(dict, 1, Rf_mkString("Object `%s` not found in data"));
+  else
+    SET_VECTOR_ELT(dict, 1, lookup_msg);
+
+  static SEXP nms = NULL;
+  if (!nms) {
+    nms = Rf_allocVector(STRSXP, 3);
+    R_PreserveObject(nms);
+    SET_STRING_ELT(nms, 0, Rf_mkChar("src"));
+    SET_STRING_ELT(nms, 1, Rf_mkChar("lookup_msg"));
+    SET_STRING_ELT(nms, 2, Rf_mkChar("read_only"));
+  }
+  static SEXP s3 = NULL;
+  if (!s3) {
+    s3 = Rf_mkString("dictionary");
+    R_PreserveObject(s3);
+  }
+
+  Rf_setAttrib(dict, R_ClassSymbol, s3);
+  Rf_setAttrib(dict, R_NamesSymbol, nms);
+
+  UNPROTECT(1);
+  return dict;
+}
diff --git a/src/utils.h b/src/utils.h
new file mode 100644
index 0000000..18dc382
--- /dev/null
+++ b/src/utils.h
@@ -0,0 +1,40 @@
+#ifndef RLANG_UTILS_H
+#define RLANG_UTILS_H
+
+#define R_NO_REMAP
+#include <stdbool.h>
+#include <R.h>
+#include <Rinternals.h>
+
+#include "formula.h"
+
+
+bool is_lazy_load(SEXP x);
+bool is_lang(SEXP x, const char* f);
+SEXP last_cons(SEXP x);
+SEXP make_formula1(SEXP rhs, SEXP env);
+SEXP rlang_fun(SEXP sym);
+int is_symbolic(SEXP x);
+int is_true(SEXP x);
+int is_sym(SEXP sym, const char* string);
+int is_rlang_prefixed(SEXP x, int (*sym_predicate)(SEXP));
+int is_any_call(SEXP x, int (*sym_predicate)(SEXP));
+int is_prefixed_call(SEXP x, int (*sym_predicate)(SEXP));
+int is_rlang_call(SEXP x, int (*sym_predicate)(SEXP));
+bool is_character(SEXP x);
+SEXP names(SEXP x);
+bool has_name_at(SEXP x, R_len_t i);
+bool is_str_empty(SEXP str);
+bool is_object(SEXP x);
+bool is_atomic(SEXP x);
+bool is_list(SEXP x);
+SEXP set_names(SEXP x, SEXP nms);
+bool is_scalar_atomic(SEXP x);
+const char* kind_c_str(SEXPTYPE kind);
+bool is_empty(SEXP x);
+bool is_vector(SEXP x);
+bool is_null(SEXP x);
+bool as_bool(SEXP x);
+
+
+#endif
diff --git a/src/vector.h b/src/vector.h
new file mode 100644
index 0000000..d9c2593
--- /dev/null
+++ b/src/vector.h
@@ -0,0 +1,126 @@
+#define R_NO_REMAP
+#include <Rinternals.h>
+#include <stdbool.h>
+
+
+// In particular, this returns 1 for environments
+R_len_t vec_length(SEXP x) {
+  switch (TYPEOF(x)) {
+  case LGLSXP:
+  case INTSXP:
+  case REALSXP:
+  case CPLXSXP:
+  case STRSXP:
+  case RAWSXP:
+  case VECSXP:
+    return Rf_length(x);
+  case NILSXP:
+    return 0;
+  default:
+    return 1;
+  }
+}
+
+
+// Copy --------------------------------------------------------------
+
+void vec_copy_n(SEXP src, R_len_t n, SEXP dest,
+                R_len_t offset_dest,
+                R_len_t offset_src) {
+  switch (TYPEOF(dest)) {
+  case LGLSXP: {
+    int* src_data = LOGICAL(src);
+    int* dest_data = LOGICAL(dest);
+    for (R_len_t i = 0; i != n; ++i)
+      dest_data[i + offset_dest] = src_data[i + offset_src];
+    break;
+  }
+  case INTSXP: {
+    int* src_data = INTEGER(src);
+    int* dest_data = INTEGER(dest);
+    for (R_len_t i = 0; i != n; ++i)
+      dest_data[i + offset_dest] = src_data[i + offset_src];
+    break;
+  }
+  case REALSXP: {
+    double* src_data = REAL(src);
+    double* dest_data = REAL(dest);
+    for (R_len_t i = 0; i != n; ++i)
+      dest_data[i + offset_dest] = src_data[i + offset_src];
+    break;
+  }
+  case CPLXSXP: {
+    Rcomplex* src_data = COMPLEX(src);
+    Rcomplex* dest_data = COMPLEX(dest);
+    for (R_len_t i = 0; i != n; ++i)
+      dest_data[i + offset_dest] = src_data[i + offset_src];
+    break;
+  }
+  case RAWSXP: {
+    Rbyte* src_data = RAW(src);
+    Rbyte* dest_data = RAW(dest);
+    for (R_len_t i = 0; i != n; ++i)
+      dest_data[i + offset_dest] = src_data[i + offset_src];
+    break;
+  }
+  case STRSXP: {
+    SEXP elt;
+    for (R_len_t i = 0; i != n; ++i) {
+      elt = STRING_ELT(src, i + offset_src);
+      SET_STRING_ELT(dest, i + offset_dest, elt);
+    }
+    break;
+  }
+  case VECSXP: {
+    SEXP elt;
+    for (R_len_t i = 0; i != n; ++i) {
+      elt = VECTOR_ELT(src, i + offset_src);
+      SET_VECTOR_ELT(dest, i + offset_dest, elt);
+    }
+    break;
+  }
+  default:
+    Rf_errorcall(R_NilValue, "Copy requires vectors");
+  }
+}
+
+
+// Coercion ----------------------------------------------------------
+
+SEXP namespace_rlang_sym(SEXP sym) {
+  static SEXP rlang_sym = NULL;
+  if (!rlang_sym)
+    rlang_sym = Rf_install("rlang");
+  return(Rf_lang3(Rf_install("::"), rlang_sym, sym));
+}
+
+SEXP vec_coercer_sym(SEXP dest) {
+  switch(TYPEOF(dest)) {
+  case LGLSXP: return namespace_rlang_sym(Rf_install("as_logical"));
+  case INTSXP: return namespace_rlang_sym(Rf_install("as_integer"));
+  case REALSXP: return namespace_rlang_sym(Rf_install("as_double"));
+  case CPLXSXP: return namespace_rlang_sym(Rf_install("as_complex"));
+  case STRSXP: return namespace_rlang_sym(Rf_install("as_character"));
+  case RAWSXP: return namespace_rlang_sym(Rf_install("as_bytes"));
+  default: Rf_errorcall(R_NilValue, "No coercion implemented for `%s`", Rf_type2char(TYPEOF(dest)));
+  }
+}
+
+void vec_copy_coerce_n(SEXP src, R_len_t n, SEXP dest,
+                       R_len_t offset_dest,
+                       R_len_t offset_src) {
+  if (TYPEOF(src) != TYPEOF(dest)) {
+    if (OBJECT(src))
+      Rf_errorcall(R_NilValue, "Can't splice S3 objects");
+    // FIXME: This callbacks to rlang R coercers with an extra copy.
+    PROTECT_INDEX ipx;
+    SEXP call, coerced;
+    PROTECT_WITH_INDEX(call = vec_coercer_sym(dest), &ipx);
+    REPROTECT(call = Rf_lang2(call, src), ipx);
+    REPROTECT(coerced = Rf_eval(call, R_BaseEnv), ipx);
+    vec_copy_n(coerced, n, dest, offset_dest, offset_src);
+    UNPROTECT(1);
+  } else {
+    vec_copy_n(src, n, dest, offset_dest, offset_src);
+  }
+}
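`vec_copy_n()` above is an offset block copy specialised per SEXP type, and `atom_squash()` drives it by passing its running `count` as the destination offset. A type-generic sketch of that interplay with plain doubles (the `concat` helper is illustrative):

```c
#include <stddef.h>

/* Offset block copy, mirroring one branch of vec_copy_n(). */
static void copy_block(const double* src, int n, double* dest,
                       int offset_dest, int offset_src) {
  for (int i = 0; i != n; ++i)
    dest[i + offset_dest] = src[i + offset_src];
}

/* Concatenate several vectors the way atom_squash() does: the
 * running `count` serves as the destination offset for each input. */
int concat(const double** srcs, const int* lens, int n_srcs, double* out) {
  int count = 0;
  for (int i = 0; i < n_srcs; ++i) {
    copy_block(srcs[i], lens[i], out, count, 0);
    count += lens[i];
  }
  return count;  /* total number of elements written */
}
```

The caller is responsible for sizing `out` beforehand, which is exactly what the `squash_info()` sizing pass provides in the real code.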
diff --git a/tests/testthat.R b/tests/testthat.R
new file mode 100644
index 0000000..70f8b03
--- /dev/null
+++ b/tests/testthat.R
@@ -0,0 +1,4 @@
+library("testthat")
+library("rlang")
+
+test_check("rlang")
diff --git a/tests/testthat/helper-capture.R b/tests/testthat/helper-capture.R
new file mode 100644
index 0000000..10a06c7
--- /dev/null
+++ b/tests/testthat/helper-capture.R
@@ -0,0 +1,10 @@
+
+named <- function(x) {
+  set_names(x, names2(x))
+}
+named_list <- function(...) {
+  named(list(...))
+}
+quos_list <- function(...) {
+  set_attrs(named_list(...), class = "quosures")
+}
diff --git a/tests/testthat/helper-locale.R b/tests/testthat/helper-locale.R
new file mode 100644
index 0000000..fd0ece7
--- /dev/null
+++ b/tests/testthat/helper-locale.R
@@ -0,0 +1,46 @@
+get_lang_strings <- function() {
+  lang_strings <- c(
+    de = "Gl\u00fcck",
+    cn = "\u5e78\u798f",
+    ru = "\u0441\u0447\u0430\u0441\u0442\u044c\u0435",
+    ko = "\ud589\ubcf5"
+  )
+
+  native_lang_strings <- enc2native(lang_strings)
+
+  same <- (lang_strings == native_lang_strings)
+
+  list(
+    same = lang_strings[same],
+    different = lang_strings[!same]
+  )
+}
+
+get_native_lang_string <- function() {
+  lang_strings <- get_lang_strings()
+  if (length(lang_strings$same) == 0) testthat::skip("No native language string available")
+  lang_strings$same[[1L]]
+}
+
+get_alien_lang_string <- function() {
+  lang_strings <- get_lang_strings()
+  if (length(lang_strings$different) == 0) testthat::skip("No alien language string available")
+  lang_strings$different[[1L]]
+}
+
+with_non_utf8_locale <- function(code) {
+  old_locale <- mut_non_utf8_locale()
+  on.exit(mut_ctype(old_locale), add = TRUE)
+  code
+}
+
+mut_non_utf8_locale <- function() {
+  if (.Platform$OS.type == "windows") return(NULL)
+  tryCatch(
+    locale <- mut_ctype("en_US.ISO8859-1"),
+    warning = function(e) {
+      testthat::skip("Cannot set latin-1 locale")
+    }
+  )
+  locale
+}
diff --git a/tests/testthat/helper-stack.R b/tests/testthat/helper-stack.R
new file mode 100644
index 0000000..5fecb2d
--- /dev/null
+++ b/tests/testthat/helper-stack.R
@@ -0,0 +1,22 @@
+
+fixup_calls <- function(x) {
+  cur_pos <- sys.nframe() - 1
+  x[seq(cur_pos + 1, length(x))]
+}
+
+fixup_ctxt_depth <- function(x) {
+  x - (sys.nframe() - 1)
+}
+fixup_call_depth <- function(x) {
+  x - (call_depth() - 1)
+}
+
+fixup_call_trail <- function(trail) {
+  eval_callers <- ctxt_stack_callers()
+  cur_trail <- trail_make(eval_callers)
+  cur_pos <- eval_callers[1]
+
+  indices <- seq(1, length(trail) - length(cur_trail))
+  trail <- trail[indices]
+  trail - cur_pos
+}
diff --git a/tests/testthat/test-arg.R b/tests/testthat/test-arg.R
new file mode 100644
index 0000000..d9efe74
--- /dev/null
+++ b/tests/testthat/test-arg.R
@@ -0,0 +1,37 @@
+context("arg")
+
+test_that("matches arg", {
+  myarg <- c("foo", "baz")
+  expect_identical(arg_match(myarg, c("bar", "foo")), "foo")
+  expect_error(
+    regexp = "`myarg` should be one of: \"bar\" or \"baz\"",
+    arg_match(myarg, c("bar", "baz"))
+  )
+})
+
+test_that("informative error message on partial match", {
+  myarg <- "f"
+  expect_error(
+    regexp = "Did you mean \"foo\"?",
+    arg_match(myarg, c("bar", "foo"))
+  )
+})
+
+test_that("gets choices from function", {
+  fn <- function(myarg = c("bar", "foo")) arg_match(myarg)
+  expect_error(fn("f"), "Did you mean \"foo\"?")
+  expect_identical(fn("foo"), "foo")
+})
+
+test_that("is_missing() works with symbols", {
+  x <- missing_arg()
+  expect_true(is_missing(x))
+})
+
+test_that("is_missing() works with non-symbols", {
+  expect_true(is_missing(missing_arg()))
+
+  l <- list(missing_arg())
+  expect_true(is_missing(l[[1]]))
+  expect_error(missing(l[[1]]), "invalid use")
+})
diff --git a/tests/testthat/test-attr.R b/tests/testthat/test-attr.R
new file mode 100644
index 0000000..72aba5f
--- /dev/null
+++ b/tests/testthat/test-attr.R
@@ -0,0 +1,51 @@
+context("attributes")
+
+test_that("names2() takes care of missing values", {
+  x <- set_names(1:3, c("a", NA, "b"))
+  expect_identical(names2(x), c("a", "", "b"))
+})
+
+test_that("names2() fails for environments", {
+  expect_error(names2(env()), "Use env_names() for environments.", fixed = TRUE)
+})
+
+test_that("set_attrs() fails with uncopyable types", {
+  expect_error(set_attrs(env(), foo = "bar"), "is uncopyable")
+})
+
+test_that("mut_attrs() fails with copyable types", {
+  expect_error(mut_attrs(letters, foo = "bar"), "is copyable")
+})
+
+test_that("set_attrs() called with NULL zaps attributes", {
+  obj <- set_attrs(letters, foo = "bar")
+  expect_identical(set_attrs(obj, NULL), letters)
+})
+
+test_that("set_attrs() does not zap old attributes", {
+  obj <- set_attrs(letters, foo = "bar")
+  obj <- set_attrs(obj, baz = "bam")
+  expect_named(attributes(obj), c("foo", "baz"))
+})
+
+test_that("inputs must be valid", {
+  expect_error(set_names(environment()), "must be a vector")
+  expect_error(set_names(1:10, letters[1:4]), "same length")
+})
+
+test_that("can supply vector or ...", {
+  expect_named(set_names(1:2, c("a", "b")), c("a", "b"))
+  expect_named(set_names(1:2, "a", "b"), c("a", "b"))
+  expect_named(set_names(1:2, list("a"), list("b")), c("a", "b"))
+})
+
+test_that("can supply function/formula to rename", {
+  x <- c(a = 1, b = 2)
+  expect_named(set_names(x, toupper), c("A", "B"))
+  expect_named(set_names(x, ~ toupper(.)), c("A", "B"))
+  expect_named(set_names(x, paste, "foo"), c("a foo", "b foo"))
+})
+
+test_that("set_names() zaps names", {
+  expect_null(names(set_names(mtcars, NULL)))
+})
diff --git a/tests/testthat/test-compat.R b/tests/testthat/test-compat.R
new file mode 100644
index 0000000..25c924f
--- /dev/null
+++ b/tests/testthat/test-compat.R
@@ -0,0 +1,90 @@
+context("compat")
+
+test_that("names() dispatches on environment", {
+  env <- child_env(NULL, foo = "foo", bar = "bar")
+  expect_identical(sort(names(env)), c("bar", "foo"))
+})
+
+test_that("lazy objects are converted to tidy quotes", {
+  env <- child_env(get_env())
+
+  lazy <- structure(list(expr = quote(foo(bar)), env = env), class = "lazy")
+  expect_identical(compat_lazy(lazy), new_quosure(quote(foo(bar)), env))
+
+  lazy_str <- "foo(bar)"
+  expect_identical(compat_lazy(lazy_str), quo(foo(bar)))
+
+  lazy_lang <- quote(foo(bar))
+  expect_identical(compat_lazy(lazy_lang), quo(foo(bar)))
+
+  lazy_sym <- quote(foo)
+  expect_identical(compat_lazy(lazy_sym), quo(foo))
+})
+
+test_that("lazy_dots objects are converted to tidy quotes", {
+  env <- child_env(get_env())
+
+  lazy_dots <- structure(class = "lazy_dots", list(
+    lazy = structure(list(expr = quote(foo(bar)), env = env), class = "lazy"),
+    lazy_lang = quote(foo(bar))
+  ))
+
+  expected <- list(
+    lazy = new_quosure(quote(foo(bar)), env),
+    lazy_lang = quo(foo(bar)),
+    quo(foo(bar))
+  )
+
+  expect_identical(compat_lazy_dots(lazy_dots, get_env(), "foo(bar)"), expected)
+})
+
+test_that("unnamed lazy_dots are given default names", {
+  lazy_dots <- structure(class = "lazy_dots", list(
+    "foo(baz)",
+    quote(foo(bar))
+  ))
+
+  expected <- list(
+    `foo(baz)` = quo(foo(baz)),
+    `foo(bar)` = quo(foo(bar)),
+    foobarbaz = quo(foo(barbaz))
+  )
+  dots <- compat_lazy_dots(lazy_dots, get_env(), foobarbaz = "foo(barbaz)", .named = TRUE)
+
+  expect_identical(dots, expected)
+})
+
+test_that("compat_lazy() handles missing arguments", {
+  expect_identical(compat_lazy(), quo())
+})
+
+test_that("compat_lazy_dots() takes lazy objects", {
+  lazy <- set_attrs(list(expr = quote(foo), env = empty_env()), class = "lazy")
+  expect_identical(compat_lazy_dots(lazy), named_list(new_quosure(quote(foo), empty_env())))
+})
+
+test_that("compat_lazy_dots() takes symbolic objects", {
+  expect_identical(compat_lazy_dots(quote(foo), empty_env()), named_list(new_quosure(quote(foo), empty_env())))
+  expect_identical(compat_lazy_dots(quote(foo(bar)), empty_env()), named_list(new_quosure(quote(foo(bar)), empty_env())))
+})
+
+test_that("compat_lazy() demotes character vectors to strings", {
+  expect_identical(compat_lazy_dots(NULL, get_env(), c("foo", "bar")), named_list(as_quosure(~foo)))
+})
+
+test_that("compat_lazy() handles numeric vectors", {
+  expect_identical(compat_lazy_dots(NULL, get_env(), NA_real_), named_list(set_env(quo(NA_real_))))
+  expect_warning(expect_identical(compat_lazy_dots(NULL, get_env(), 1:3), named_list(set_env(quo(1L)))), "Truncating vector")
+})
+
+test_that("compat_lazy() handles bare formulas", {
+  expect_identical(compat_lazy(~foo), quo(foo))
+  expect_identical(compat_lazy_dots(~foo), named_list(quo(foo)))
+})
+
+test_that("trimws() trims", {
+  x <- "  foo.  "
+  expect_identical(trimws(x), "foo.")
+  expect_identical(trimws(x, "l"), "foo.  ")
+  expect_identical(trimws(x, "r"), "  foo.")
+})
diff --git a/tests/testthat/test-conditions.R b/tests/testthat/test-conditions.R
new file mode 100644
index 0000000..fa9b8ea
--- /dev/null
+++ b/tests/testthat/test-conditions.R
@@ -0,0 +1,123 @@
+context("conditions") # ----------------------------------------------
+
+test_that("new_cnd() constructs all fields", {
+  cond <- new_cnd("cnd_class", .msg = "cnd message")
+  expect_equal(conditionMessage(cond), "cnd message")
+  expect_is(cond, "cnd_class")
+})
+
+test_that("new_cnd() throws with unnamed fields", {
+  expect_error(new_cnd("class", "msg", 10), "must have named data fields")
+})
+
+test_that("cnd_signal() creates muffle restarts", {
+  withCallingHandlers(cnd_signal("foo", muffle = TRUE),
+    foo = function(c) {
+      expect_true(rst_exists("muffle"))
+      expect_is(c, "mufflable")
+    }
+  )
+})
+
+test_that("cnd_signal() includes call info", {
+  cnd <- new_cnd("cnd", .call = quote(foo(bar)))
+  fn <- function(...) cnd_signal(cnd, .call = call)
+
+  call <- FALSE
+  with_handlers(fn(foo(bar)), cnd = inplace(function(c) {
+    expect_identical(c$.call, quote(fn(foo(bar))))
+    expect_null(conditionCall(c))
+  }))
+
+  call <- TRUE
+  with_handlers(fn(foo(bar)), cnd = inplace(function(c) {
+    expect_identical(conditionCall(c), quote(fn(foo(bar))))
+  }))
+})
+
+test_that("abort() includes call info", {
+  fn <- function(...) abort("abort", "cnd", call = call)
+
+  call <- FALSE
+  with_handlers(fn(foo(bar)), cnd = exiting(function(c) {
+    expect_identical(c$.call, quote(fn(foo(bar))))
+    expect_null(conditionCall(c))
+  }))
+
+  call <- TRUE
+  with_handlers(fn(foo(bar)), cnd = exiting(function(c) {
+    expect_identical(conditionCall(c), quote(fn(foo(bar))))
+  }))
+})
+
+test_that("error when msg is not a string", {
+  expect_error(warn(letters), "must be a string")
+})
+
+
+context("restarts") # ------------------------------------------------
+
+test_that("restarts are established", {
+  with_restarts(foo = function() {}, expect_true(rst_exists("foo")))
+})
+
+
+context("handlers") # ------------------------------------------------
+
+test_that("Local handlers can muffle mufflable conditions", {
+  signal_mufflable <- function() cnd_signal("foo", with_muffle = TRUE)
+  muffling_handler <- inplace(function(c) NULL, muffle = TRUE)
+  non_muffling_handler <- inplace(function(c) NULL, muffle = FALSE)
+
+  expect_error(regexp = "not muffled!",
+    withCallingHandlers(foo = function(c) stop("not muffled!"), {
+      withCallingHandlers(foo = non_muffling_handler,
+        signal_mufflable())
+    }))
+
+  expect_error(regexp = NA,
+    withCallingHandlers(foo = function(c) stop("not muffled!"), {
+      withCallingHandlers(foo = muffling_handler,
+        signal_mufflable())
+    }))
+})
+
+test_that("with_handlers() establishes inplace and exiting handlers", {
+  handlers <- list(
+    error = exiting(function(c) "caught error"),
+    message = exiting(function(c) "caught message"),
+    warning = inplace(function(c) "warning"),
+    foobar = inplace(function(c) cat("foobar"))
+  )
+
+  expect_equal(with_handlers(identity(letters), splice(handlers)), identity(letters))
+  expect_equal(with_handlers(stop(letters), splice(handlers)), "caught error")
+  expect_equal(with_handlers(message(letters), splice(handlers)), "caught message")
+  expect_warning(expect_equal(with_handlers({ warning("warn!"); letters }, splice(handlers)), identity(letters)), "warn!")
+  expect_output(expect_equal(with_handlers({ cnd_signal("foobar"); letters }, splice(handlers)), identity(letters)), "foobar")
+})
+
+test_that("set_names2() fills in empty names", {
+  chr <- c("a", b = "B", "c")
+  expect_equal(set_names2(chr), c(a = "a", b = "B", c = "c"))
+})
+
+test_that("restarting() handlers pass along all requested arguments", {
+  signal_foo <- function() {
+    cnd_signal("foo", foo_field = "foo_field")
+  }
+  fn <- function() {
+    with_handlers(signal_foo(), foo = restart_handler)
+  }
+
+  restart_handler <- restarting("rst_foo",
+    a = "a",
+    splice(list(b = "b")),
+    .fields = c(field_arg = "foo_field")
+  )
+
+  rst_foo <- function(a, b, field_arg) {
+    expect_equal(list(a, b, field_arg), list("a", "b", "foo_field"))
+  }
+  with_restarts(fn(), rst_foo = rst_foo)
+})
diff --git a/tests/testthat/test-dictionary.R b/tests/testthat/test-dictionary.R
new file mode 100644
index 0000000..7fa7fae
--- /dev/null
+++ b/tests/testthat/test-dictionary.R
@@ -0,0 +1,60 @@
+context("dictionary")
+
+test_that("can't access non-existent list members", {
+  x1 <- list(y = 1)
+  x2 <- as_dictionary(x1)
+
+  expect_equal(x2$y, 1)
+  expect_error(x2$z, "Object `z` not found in data")
+  expect_error(x2[["z"]], "Object `z` not found in data")
+})
+
+test_that("can't access non-existent environment components", {
+  x1 <- list2env(list(y = 1))
+  x2 <- as_dictionary(x1)
+
+  expect_equal(x2$y, 1)
+  expect_error(x2$z, "Object `z` not found in environment")
+  expect_error(x2[["z"]], "Object `z` not found in environment")
+})
+
+test_that("can't use non-character vectors", {
+  x <- as_dictionary(list(y = 1))
+
+  expect_error(x[[1]], "subset with a string")
+  expect_error(x[[c("a", "b")]], "subset with a string")
+})
+
+test_that("subsetting .data pronoun fails when not supplied", {
+  f <- quo(.data$foo)
+  expect_error(eval_tidy(f), "not found in data")
+})
+
+test_that("names() and length() methods", {
+  x <- as_dictionary(mtcars)
+  expect_identical(names(x), names(mtcars))
+  expect_identical(length(x), length(mtcars))
+})
+
+test_that("can replace elements of dictionaries", {
+  expect_src <- function(dict, expected) {
+    src <- .subset2(dict, "src")
+    expect_identical(src, expected)
+  }
+
+  x <- as_dictionary(list(foo = "bar"))
+
+  x$foo <- "baz"
+  expect_src(x, list(foo = "baz"))
+
+  x[["bar"]] <- "bam"
+  expect_src(x, list(foo = "baz", bar = "bam"))
+
+  expect_error(x[[3]] <- NULL, "Must subset with a string")
+})
+
+test_that("cannot replace elements of read-only dictionaries", {
+  x <- as_dictionary(list(foo = "bar"), read_only = TRUE)
+  expect_error(x$foo <- "baz", "read-only dictionary")
+  expect_error(x[["foo"]] <- "baz", "read-only dictionary")
+})
diff --git a/tests/testthat/test-dots.R b/tests/testthat/test-dots.R
new file mode 100644
index 0000000..68ea2c7
--- /dev/null
+++ b/tests/testthat/test-dots.R
@@ -0,0 +1,61 @@
+context("dots")
+
+test_that("dots are retrieved from arguments", {
+  fn <- function(f, ...) f(...)
+  expect_identical(fn(exprs), named_list())
+
+  g <- function(f, ...) fn(f, ...)
+  expect_identical(g(exprs, a = 1, foo = bar), list(a = 1, foo = quote(bar)))
+})
+
+test_that("exprs() captures empty arguments", {
+  expect_identical(exprs(, , .ignore_empty = "none"), set_names(list(missing_arg(), missing_arg()), c("", "")))
+})
+
+test_that("dots are always named", {
+  expect_named(dots_list("foo"), "")
+  expect_named(dots_splice("foo", list("bar")), c("", ""))
+  expect_named(exprs(foo, bar), c("", ""))
+})
+
+test_that("dots can be spliced", {
+  expect_identical(dots_values(!!! list(letters)), named_list(splice(list(letters))))
+  expect_identical(flatten(dots_values(!!! list(letters))), list(letters))
+  expect_identical(ll(!!! list(letters)), list(letters))
+  wrapper <- function(...) ll(...)
+  expect_identical(wrapper(!!! list(letters)), list(letters))
+})
+
+test_that("interpolation by value does not guard formulas", {
+  expect_identical(dots_values(~1), named_list(~1))
+})
+
+test_that("dots names can be unquoted", {
+  expect_identical(dots_values(!! paste0("foo", "bar") := 10), list(foobar = 10))
+})
+
+test_that("can take forced dots with strict = FALSE", {
+  fn <- function(strict, ...) {
+    force(..1)
+    captureDots(strict)
+  }
+  expect_error(fn(TRUE, letters), "already been evaluated")
+  expect_identical(fn(FALSE, letters), NULL)
+})
+
+test_that("dots_values() handles forced dots", {
+  fn <- function(...) {
+    force(..1)
+    dots_values(...)
+  }
+  expect_identical(fn("foo"), named_list("foo"))
+
+  expect_identical(lapply(1:2, function(...) dots_values(...)), list(named_list(1L), named_list(2L)))
+})
+
+test_that("cleans empty arguments", {
+  expect_identical(dots_list(1, a = ), named_list(1))
+  expect_identical(ll(1, a = ), list(1))
+
+  expect_identical(dots_list(, 1, a = , .ignore_empty = "all"), named_list(1))
+})
diff --git a/tests/testthat/test-encoding.R b/tests/testthat/test-encoding.R
new file mode 100644
index 0000000..3db182b
--- /dev/null
+++ b/tests/testthat/test-encoding.R
@@ -0,0 +1,39 @@
+context("encoding")
+
+test_that("can roundtrip symbols in non-UTF8 locale", {
+  with_non_utf8_locale({
+    expect_identical(
+      as_string(sym(get_alien_lang_string())),
+      get_alien_lang_string()
+    )
+  })
+})
+
+test_that("Unicode escapes are always converted to UTF8 characters on roundtrip", {
+  expect_identical(
+    as_string(sym("<U+5E78><U+798F>")),
+    "\u5E78\u798F"
+  )
+})
+
+test_that("Unicode escapes are always converted to UTF8 characters in as_list()", {
+  with_non_utf8_locale({
+    env <- child_env(empty_env())
+    env_bind(env, !! get_alien_lang_string() := NULL)
+    list <- as_list(env)
+    expect_identical(names(list), get_alien_lang_string())
+  })
+})
+
+test_that("Unicode escapes are always converted to UTF8 characters with env_names()", {
+  with_non_utf8_locale({
+    env <- child_env(empty_env())
+    env_bind(env, !! get_alien_lang_string() := NULL)
+    expect_identical(env_names(env), get_alien_lang_string())
+  })
+})
+
+test_that("Unicode escapes are always converted to UTF8 in quos()", {
+  q <- quos(`<U+5E78><U+798F>` = 1)
+  expect_identical(names(q), "\u5e78\u798f")
+})
diff --git a/tests/testthat/test-env.R b/tests/testthat/test-env.R
new file mode 100644
index 0000000..ff7da3c
--- /dev/null
+++ b/tests/testthat/test-env.R
@@ -0,0 +1,180 @@
+context("environments")
+
+test_that("get_env() returns current frame by default", {
+  fn <- function() expect_identical(get_env(), environment())
+  fn()
+})
+
+test_that("env_parent() returns enclosure frame by default", {
+  enclos_env <- child_env(pkg_env("rlang"))
+  fn <- with_env(enclos_env, function() env_parent())
+  expect_identical(fn(), enclos_env)
+})
+
+test_that("child_env() has correct parent", {
+  env <- child_env(empty_env())
+  expect_false(env_has(env, "list", inherit = TRUE))
+
+  fn <- function() list(new = child_env(get_env()), env = environment())
+  out <- fn()
+  expect_identical(env_parent(out$new), out$env)
+
+  expect_identical(env_parent(child_env(NULL)), empty_env())
+  expect_identical(env_parent(child_env("base")), base_env())
+})
+
+test_that("env_parent() reports correct parent", {
+  env <- child_env(child_env(NULL, obj = "b"), obj = "a")
+
+  expect_identical(env_parent(env, 1)$obj, "b")
+  expect_identical(env_parent(env, 2), empty_env())
+  expect_identical(env_parent(env, 3), empty_env())
+})
+
+test_that("env_tail() climbs env chain", {
+  expect_identical(env_tail(global_env()), base_env())
+})
+
+test_that("promises are created", {
+  env <- child_env(NULL)
+
+  env_bind_exprs(env, foo = bar <- "bar")
+  expect_false(env_has(get_env(), "bar"))
+
+  force(env$foo)
+  expect_true(env_has(get_env(), "bar"))
+
+  env_bind_exprs(env, stop = stop("forced"))
+  expect_error(env$stop, "forced")
+})
+
+test_that("with_env() evaluates within correct environment", {
+  fn <- function() {
+    g(get_env())
+    "normal return"
+  }
+  g <- function(env) {
+    with_env(env, return("early return"))
+  }
+  expect_equal(fn(), "early return")
+})
+
+test_that("locally() evaluates within correct environment", {
+  env <- child_env("rlang")
+  local_env <- with_env(env, locally(get_env()))
+  expect_identical(env_parent(local_env), env)
+})
+
+test_that("ns_env() returns current namespace", {
+  expect_identical(with_env(ns_env("rlang"), ns_env()), get_env(rlang::get_env))
+})
+
+test_that("ns_imports_env() returns imports env", {
+  expect_identical(with_env(ns_env("rlang"), ns_imports_env()), env_parent(get_env(rlang::get_env)))
+})
+
+test_that("ns_env_name() returns namespace name", {
+  expect_identical(with_env(ns_env("base"), ns_env_name()), "base")
+  expect_identical(ns_env_name(rlang::get_env), "rlang")
+})
+
+test_that("as_env() dispatches correctly", {
+  expect_identical(as_env("base"), base_env())
+  expect_false(env_has(as_env(set_names(letters)), "map"))
+
+  expect_identical(as_env(NULL), empty_env())
+
+  expect_true(all(env_has(as_env(mtcars), names(mtcars))))
+  expect_identical(env_parent(as_env(mtcars)), empty_env())
+  expect_identical(env_parent(as_env(mtcars, base_env())), base_env())
+})
+
+test_that("env_inherits() finds ancestor", {
+  env <- child_env(get_env())
+  env <- child_env(env)
+  expect_true(env_inherits(env, get_env()))
+  expect_false(env_inherits(env, ns_env("utils")))
+
+  expect_true(env_inherits(empty_env(), empty_env()))
+})
+
+test_that("env() creates child of current environment", {
+  env <- env(a = 1, b = "foo")
+  expect_identical(env_parent(env), get_env())
+  expect_identical(env$b, "foo")
+})
+
+test_that("set_env() sets current env by default", {
+  quo <- set_env(locally(~foo))
+  expect_identical(f_env(quo), get_env())
+})
+
+test_that("finds correct env type", {
+  expect_identical(identity(env_type(ctxt_frame(2)$env)), "frame")
+  expect_identical(env_type(global_env()), "global")
+  expect_identical(env_type(empty_env()), "empty")
+  expect_identical(env_type(base_env()), "base")
+})
+
+test_that("get_env() fails if no default", {
+  expect_error(get_env(list()), "Can't extract an environment from a list")
+})
+
+test_that("get_env() picks up default", {
+  dft <- env()
+  expect_identical(get_env(list(), dft), dft)
+  expect_identical(get_env("a", dft), dft)
+})
+
+test_that("with_env() handles data", {
+  expect_identical(with_env(mtcars, cyl), mtcars$cyl)
+
+  foo <- "foo"
+  expect_identical(with_env(mtcars, foo), "foo")
+})
+
+test_that("with_env() evaluates in env", {
+  env <- env()
+  expect_identical(with_env(env, get_env()), env)
+})
+
+test_that("env_depth() counts parents", {
+  expect_identical(env_depth(child_env(child_env(NULL))), 2L)
+  expect_identical(env_depth(empty_env()), 0L)
+})
+
+test_that("env_parents() returns all parents", {
+  expect_identical(env_parents(empty_env()), ll())
+  env1 <- child_env(NULL)
+  env2 <- child_env(env1)
+  expect_identical(env_parents(env2), ll(env1, empty_env()))
+})
+
+test_that("scoped_envs() includes global and empty envs", {
+  envs <- scoped_envs()
+  expect_identical(envs[[1]], global_env())
+  expect_identical(envs[[length(envs)]], empty_env())
+})
+
+test_that("scoped_envs() returns named environments", {
+  expect_identical(names(scoped_envs()), scoped_names())
+})
+
+test_that("scoped_env() deals with empty environment", {
+  expect_identical(scoped_env("NULL"), empty_env())
+})
+
+test_that("env() doesn't partial match on env_bind()'s .env", {
+  expect_true(all(env_has(env(.data = 1, . = 2), c(".data", "."))))
+})
+
+test_that("new_environment() creates a child of the empty env", {
+  env <- new_environment(list(a = 1, b = 2))
+  expect_true(all(env_has(env, c("a", "b"))))
+  expect_identical(env_parent(env), empty_env())
+})
+
+test_that("new_environment() accepts empty vectors", {
+  expect_identical(length(new_environment()), 0L)
+  expect_identical(length(new_environment(dbl())), 0L)
+})
diff --git a/tests/testthat/test-eval.R b/tests/testthat/test-eval.R
new file mode 100644
index 0000000..90782fe
--- /dev/null
+++ b/tests/testthat/test-eval.R
@@ -0,0 +1,13 @@
+context("invoke")
+
+test_that("invoke() buries arguments", {
+  expect_identical(invoke(call_inspect, 1:2, 3L), quote(.fn(`1`, `2`, `3`)))
+  expect_identical(invoke("call_inspect", 1:2, 3L), quote(call_inspect(`1`, `2`, `3`)))
+  expect_identical(invoke(call_inspect, 1:2, 3L, .bury = c("foo", "bar")), quote(foo(`bar1`, `bar2`, `bar3`)))
+  expect_identical(invoke(call_inspect, 1:2, 3L, .bury = NULL), as.call(list(call_inspect, 1L, 2L, 3L)))
+})
+
+test_that("invoke() can be called without arguments", {
+  expect_identical(invoke("list"), list())
+  expect_identical(invoke(list), list())
+})
diff --git a/tests/testthat/test-fn.R b/tests/testthat/test-fn.R
new file mode 100644
index 0000000..4082dfe
--- /dev/null
+++ b/tests/testthat/test-fn.R
@@ -0,0 +1,74 @@
+context("function")
+
+test_that("new_function equivalent to regular function", {
+  f1 <- function(x = a + b, y) {
+    x + y
+  }
+  attr(f1, "srcref") <- NULL
+
+  f2 <- new_function(alist(x = a + b, y =), quote({x + y}))
+
+  expect_equal(f1, f2)
+})
+
+test_that("prim_name() extracts names", {
+  expect_equal(prim_name(c), "c")
+  expect_equal(prim_name(prim_eval), "eval")
+})
+
+test_that("as_closure() returns closure", {
+  expect_identical(typeof(as_closure(list)), "closure")
+  expect_identical(typeof(as_closure("list")), "closure")
+})
+
+test_that("as_closure() handles primitive functions", {
+  expect_identical(as_closure(`c`)(1, 3, 5), c(1, 3, 5))
+  expect_identical(as_closure(is.null)(1), FALSE)
+  expect_identical(as_closure(is.null)(NULL), TRUE)
+})
+
+test_that("as_closure() handles operators", {
+  expect_identical(as_closure(`-`)(.y = 10, .x = 5), -5)
+  expect_identical(as_closure(`-`)(5), -5)
+  expect_identical(as_closure(`$`)(mtcars, cyl), mtcars$cyl)
+  expect_identical(as_closure(`~`)(foo), ~foo)
+  expect_identical(as_closure(`~`)(foo, bar), foo ~ bar)
+  expect_warning(expect_identical(as_closure(`{`)(warn("foo"), 2, 3), 3), "foo")
+
+  x <- "foo"
+  as_closure(`<-`)(x, "bar")
+  expect_identical(x, "bar")
+
+  x <- list(a = 1, b = 2)
+  as_closure(`$<-`)(x, b, 20)
+  expect_identical(x, list(a = 1, b = 20))
+
+  x <- list(1, 2)
+  as_closure(`[[<-`)(x, 2, 20)
+  expect_identical(x, list(1, 20))
+
+  expect_identical(as_closure(`[<-`)(data.frame(x = 1:2, y = 3:4), 2, 2, 10L), data.frame(x = 1:2, y = c(3L, 10L)))
+  expect_identical(as_closure(`[[<-`)(list(1, 2), 2, 20), list(1, 20))
+
+  x <- ll(ll(a = "A"), ll(a = "B"))
+  expect_identical(lapply(x, as_closure(`[[`), "a"), list("A", "B"))
+})
+
+test_that("lambda shortcut handles positional arguments", {
+  expect_identical(as_function(~ ..1 + ..3)(1, 2, 3), 4)
+})
+
+test_that("lambda shortcut fails with two-sided formulas", {
+  expect_error(as_function(lhs ~ ..1 + ..3), "two-sided formula")
+})
+
+test_that("as_function() handles strings", {
+  expect_identical(as_function("mean"), mean)
+
+  env <- env(fn = function() NULL)
+  expect_identical(as_function("fn", env), env$fn)
+})
+
+test_that("fn_fmls_syms() unnames `...`", {
+  expect_identical(fn_fmls_syms(lapply), list(X = quote(X), FUN = quote(FUN), quote(...)))
+})
diff --git a/tests/testthat/test-formula.R b/tests/testthat/test-formula.R
new file mode 100644
index 0000000..7d382c3
--- /dev/null
+++ b/tests/testthat/test-formula.R
@@ -0,0 +1,151 @@
+context("formula")
+
+# Creation ----------------------------------------------------------------
+
+test_that("env must be an environment", {
+  expect_error(new_quosure(quote(a), env = list()), "must be an environment")
+})
+
+test_that("equivalent to ~", {
+  f1 <- ~abc
+  f2 <- new_quosure(quote(abc))
+
+  expect_identical(set_attrs(f1, class = c("quosure", "formula")), f2)
+})
+
+test_that("is_formula works", {
+  expect_true(is_formula(~10))
+  expect_false(is_formula(10))
+})
+
+test_that("as_quosure() uses correct env", {
+  fn <- function(expr, env = caller_env()) {
+    f <- as_quosure(expr, env)
+    list(env = get_env(), f = g(f))
+  }
+  g <- function(expr, env = caller_env()) {
+    as_quosure(expr, env)
+  }
+  f_env <- child_env(NULL)
+  f <- new_quosure(quote(expr), f_env)
+
+  out_expr_default <- fn(quote(expr))
+  out_f_default <- fn(f)
+  expect_identical(f_env(out_expr_default$f), get_env())
+  expect_identical(f_env(out_f_default$f), f_env)
+
+  user_env <- child_env(NULL)
+  out_expr <- fn(quote(expr), user_env)
+  out_f <- fn(f, user_env)
+  expect_identical(f_env(out_expr$f), user_env)
+  expect_identical(out_f$f, f)
+})
+
+
+# Getters -----------------------------------------------------------------
+
+test_that("throws errors for bad inputs", {
+  expect_error(f_rhs(1), "must be a formula")
+  expect_error(f_rhs(`~`()), "Invalid formula")
+  expect_error(f_rhs(`~`(1, 2, 3)), "Invalid formula")
+
+  expect_error(f_lhs(1), "must be a formula")
+  expect_error(f_lhs(`~`()), "Invalid formula")
+  expect_error(f_lhs(`~`(1, 2, 3)), "Invalid formula")
+
+  expect_error(f_env(1), "must be a formula")
+})
+
+test_that("extracts call, name, or scalar", {
+  expect_identical(f_rhs(~ x), quote(x))
+  expect_identical(f_rhs(~ f()), quote(f()))
+  expect_identical(f_rhs(~ 1L), 1L)
+})
+
+
+# Setters -----------------------------------------------------------------
+
+test_that("can replace RHS of one-sided formula", {
+  f <- ~ x1
+  f_rhs(f) <- quote(x2)
+
+  expect_equal(f, ~ x2)
+})
+
+test_that("can replace both sides of two-sided formula", {
+  f <- x1 ~ y1
+  f_lhs(f) <- quote(x2)
+  f_rhs(f) <- quote(y2)
+
+  expect_equal(f, x2 ~ y2)
+})
+
+test_that("can remove lhs of two-sided formula", {
+  f <- x ~ y
+  f_lhs(f) <- NULL
+
+  expect_equal(f, ~ y)
+})
+
+test_that("can modify environment", {
+  f <- x ~ y
+  env <- new.env()
+  f_env(f) <- env
+
+  expect_equal(f_env(f), env)
+})
+
+test_that("setting RHS preserves attributes", {
+  attrs <- list(foo = "bar", class = "baz")
+
+  f <- set_attrs(~foo, !!! attrs)
+  f_rhs(f) <- quote(bar)
+
+  expect_identical(f, set_attrs(~bar, !!! attrs))
+})
+
+test_that("setting LHS preserves attributes", {
+  attrs <- list(foo = "bar", class = "baz")
+
+  f <- set_attrs(~foo, !!! attrs)
+  f_lhs(f) <- quote(bar)
+
+  expect_identical(f, set_attrs(bar ~ foo, !!! attrs))
+
+  f_lhs(f) <- quote(baz)
+  expect_identical(f, set_attrs(baz ~ foo, !!! attrs))
+})
+
+test_that("setting environment preserves attributes", {
+  attrs <- list(foo = "bar", class = "baz")
+  env <- env()
+
+  f <- set_attrs(~foo, !!! attrs)
+  f_env(f) <- env
+  expect_identical(f, set_attrs(~foo, !!! attrs, .Environment = env))
+})
+
+
+# Utils --------------------------------------------------------------
+
+test_that("quosures are not recognised as bare formulas", {
+  expect_false(is_bare_formula(quo(foo)))
+})
+
+test_that("lhs is inspected", {
+  expect_true(is_formula(~foo))
+
+  expect_false(is_formula(~foo, lhs = TRUE))
+  expect_true(is_formula(~foo, lhs = FALSE))
+
+  expect_true(is_formula(foo ~ bar, lhs = TRUE))
+  expect_false(is_formula(foo ~ bar, lhs = FALSE))
+})
+
+test_that("definitions are not formulas but are formulaish", {
+  expect_false(is_formula(foo := bar))
+  expect_true(is_formulaish(foo := bar, lhs = TRUE))
+  expect_false(is_formulaish(foo := bar, lhs = FALSE))
+  expect_false(is_formulaish(foo := bar, scoped = TRUE, lhs = FALSE))
+  expect_false(is_formulaish(foo := bar, scoped = FALSE, lhs = TRUE))
+})
diff --git a/tests/testthat/test-lang-call.R b/tests/testthat/test-lang-call.R
new file mode 100644
index 0000000..5f16815
--- /dev/null
+++ b/tests/testthat/test-lang-call.R
@@ -0,0 +1,126 @@
+context("lang-call")
+
+# Creation ----------------------------------------------------------------
+
+test_that("character vector must be length 1", {
+  expect_error(lang(letters), "must be a length 1 string")
+})
+
+test_that("args can be specified individually or as list", {
+  out <- lang("f", a = 1, splice(list(b = 2)))
+  expect_equal(out, quote(f(a = 1, b = 2)))
+})
+
+test_that("creates namespaced calls", {
+  expect_identical(lang("fun", foo = quote(baz), .ns = "bar"), quote(bar::fun(foo = baz)))
+})
+
+test_that("fails with non-callable objects", {
+  expect_error(lang(1), "non-callable")
+  expect_error(lang(get_env()), "non-callable")
+})
+
+test_that("succeeds with literal functions", {
+  expect_error(regexp = NA, lang(base::mean, 1:10))
+  expect_error(regexp = NA, lang(base::list, 1:10))
+})
+
+
+# Standardisation ---------------------------------------------------------
+
+test_that("can standardise call frame", {
+  fn <- function(foo = "bar") lang_standardise(call_frame())
+  expect_identical(fn(), quote(fn()))
+  expect_identical(fn("baz"), quote(fn(foo = "baz")))
+})
+
+test_that("can modify call frame", {
+  fn <- function(foo = "bar") lang_modify(call_frame(), baz = "bam", .standardise = TRUE)
+  expect_identical(fn(), quote(fn(baz = "bam")))
+  expect_identical(fn("foo"), quote(fn(foo = "foo", baz = "bam")))
+})
+
+
+# Modification ------------------------------------------------------------
+
+test_that("can modify formulas inplace", {
+  expect_identical(lang_modify(~matrix(bar), quote(foo)), ~matrix(bar, foo))
+})
+
+test_that("optional standardisation", {
+  expect_identical(lang_modify(~matrix(bar), quote(foo), .standardise = TRUE), ~matrix(data = bar, foo))
+})
+
+test_that("new args inserted at end", {
+  call <- quote(matrix(1:10))
+  out <- lang_modify(call, nrow = 3, .standardise = TRUE)
+  expect_equal(out, quote(matrix(data = 1:10, nrow = 3)))
+})
+
+test_that("new args replace old", {
+  call <- quote(matrix(1:10))
+  out <- lang_modify(call, data = 3, .standardise = TRUE)
+  expect_equal(out, quote(matrix(data = 3)))
+})
+
+test_that("can modify calls for primitive functions", {
+  expect_identical(lang_modify(~list(), foo = "bar", .standardise = TRUE), ~list(foo = "bar"))
+})
+
+test_that("can modify calls for functions containing dots", {
+  expect_identical(lang_modify(~mean(), na.rm = TRUE, .standardise = TRUE), ~mean(na.rm = TRUE))
+})
+
+test_that("accepts unnamed arguments", {
+  expect_identical(
+    lang_modify(~get(), "foo", envir = "bar", "baz", .standardise = TRUE),
+    ~get(envir = "bar", "foo", "baz")
+  )
+})
+
+test_that("fails with duplicated arguments", {
+  expect_error(lang_modify(~mean(), na.rm = TRUE, na.rm = FALSE), "Duplicate arguments")
+  expect_error(lang_modify(~mean(), TRUE, FALSE), NA)
+})
+
+
+# Utils --------------------------------------------------------------
+
+test_that("lang_name() handles namespaced and anonymous calls", {
+  expect_equal(lang_name(quote(foo::bar())), "bar")
+  expect_equal(lang_name(quote(foo:::bar())), "bar")
+
+  expect_null(lang_name(quote(foo@bar())))
+  expect_null(lang_name(quote(foo$bar())))
+  expect_null(lang_name(quote(foo[[bar]]())))
+  expect_null(lang_name(quote(foo()())))
+  expect_null(lang_name(quote(foo::bar()())))
+  expect_null(lang_name(quote((function() NULL)())))
+})
+
+test_that("lang_name() handles formulas and frames", {
+  expect_identical(lang_name(~foo(baz)), "foo")
+
+  fn <- function() lang_name(call_frame())
+  expect_identical(fn(), "fn")
+})
+
+test_that("lang_fn() extracts function", {
+  fn <- function() lang_fn(call_frame())
+  expect_identical(fn(), fn)
+
+  expect_identical(lang_fn(~matrix()), matrix)
+})
+
+test_that("Inlined functions return NULL name", {
+  call <- quote(fn())
+  call[[1]] <- function() {}
+  expect_null(lang_name(call))
+})
+
+test_that("lang_args() and lang_args_names()", {
+  expect_identical(lang_args(~fn(a, b)), set_names(list(quote(a), quote(b)), c("", "")))
+
+  fn <- function(a, b) lang_args_names(call_frame())
+  expect_identical(fn(a = foo, b = bar), c("a", "b"))
+})
diff --git a/tests/testthat/test-lang-expr.R b/tests/testthat/test-lang-expr.R
new file mode 100644
index 0000000..9275933
--- /dev/null
+++ b/tests/testthat/test-lang-expr.R
@@ -0,0 +1,49 @@
+context("lang-expr")
+
+# expr_text() --------------------------------------------------------
+
+test_that("always returns single string", {
+  out <- expr_text(quote({
+    a + b
+  }))
+  expect_length(out, 1)
+})
+
+test_that("can truncate lines", {
+  out <- expr_text(quote({
+    a + b
+  }), nlines = 2)
+  expect_equal(out, "{\n...")
+})
+
+
+# expr_label() -------------------------------------------------------
+
+test_that("quotes strings", {
+  expect_equal(expr_label("a"), '"a"')
+  expect_equal(expr_label("\n"), '"\\n"')
+})
+
+test_that("backquotes names", {
+  expect_equal(expr_label(quote(x)), "`x`")
+})
+
+test_that("converts atomics to strings", {
+  expect_equal(expr_label(0.5), "0.5")
+})
+
+test_that("truncates long calls", {
+  expect_equal(expr_label(quote({ a + b })), "`{\n    ...\n}`")
+})
+
+
+# expr_name() --------------------------------------------------------
+
+test_that("name symbols, calls, and scalars", {
+  expect_identical(expr_name(quote(foo)), "foo")
+  expect_identical(expr_name(quote(foo(bar))), "foo(bar)")
+  expect_identical(expr_name(1L), "1")
+  expect_identical(expr_name("foo"), "foo")
+  expect_identical(expr_name(function() NULL), "function () ...")
+  expect_error(expr_name(1:2), "must quote a symbol, scalar, or call")
+})
diff --git a/tests/testthat/test-lang.R b/tests/testthat/test-lang.R
new file mode 100644
index 0000000..ba108d4
--- /dev/null
+++ b/tests/testthat/test-lang.R
@@ -0,0 +1,66 @@
+context("lang")
+
+test_that("NULL is a valid language object", {
+  expect_true(is_expr(NULL))
+})
+
+test_that("is_lang() pattern-matches", {
+  expect_true(is_lang(quote(foo(bar)), "foo"))
+  expect_false(is_lang(quote(foo(bar)), "bar"))
+  expect_true(is_lang(quote(foo(bar)), quote(foo)))
+
+  expect_true(is_lang(quote(foo(bar)), "foo", n = 1))
+  expect_false(is_lang(quote(foo(bar)), "foo", n = 2))
+
+  expect_true(is_lang(quote(foo::bar()), quote(foo::bar())))
+
+  expect_false(is_lang(1))
+  expect_false(is_lang(NULL))
+
+  expect_true(is_unary_lang(quote(+3)))
+  expect_true(is_binary_lang(quote(3 + 3)))
+})
+
+test_that("is_lang() vectorises name", {
+  expect_false(is_lang(quote(foo::bar), c("fn", "fn2")))
+  expect_true(is_lang(quote(foo::bar), c("fn", "::")))
+
+  expect_true(is_lang(quote(foo::bar), quote(`::`)))
+  expect_true(is_lang(quote(foo::bar), list(quote(`@`), quote(`::`))))
+  expect_false(is_lang(quote(foo::bar), list(quote(`@`), quote(`:::`))))
+})
+
+
+# misc -------------------------------------------------------------------
+
+test_that("qualified and namespaced symbols are recognised", {
+  expect_true(is_qualified_lang(quote(foo@baz())))
+  expect_true(is_qualified_lang(quote(foo::bar())))
+  expect_false(is_qualified_lang(quote(foo()())))
+
+  expect_false(is_namespaced_lang(quote(foo@bar())))
+  expect_true(is_namespaced_lang(quote(foo::bar())))
+})
+
+test_that("can specify ns in namespaced predicate", {
+  expr <- quote(foo::bar())
+  expect_false(is_namespaced_lang(expr, quote(bar)))
+  expect_true(is_namespaced_lang(expr, quote(foo)))
+  expect_true(is_namespaced_lang(expr, "foo"))
+})
+
+test_that("can specify ns in is_lang()", {
+  expr <- quote(foo::bar())
+  expect_true(is_lang(expr, ns = NULL))
+  expect_false(is_lang(expr, ns = ""))
+  expect_false(is_lang(expr, ns = "baz"))
+  expect_true(is_lang(expr, ns = "foo"))
+  expect_true(is_lang(expr, name = "bar", ns = "foo"))
+  expect_false(is_lang(expr, name = "baz", ns = "foo"))
+})
+
+test_that("can unnamespace calls", {
+  expect_identical(lang_unnamespace(quote(bar(baz))), quote(bar(baz)))
+  expect_identical(lang_unnamespace(quote(foo::bar(baz))), quote(bar(baz)))
+  expect_identical(lang_unnamespace(quote(foo@bar(baz))), quote(foo@bar(baz)))
+})
diff --git a/tests/testthat/test-operators.R b/tests/testthat/test-operators.R
new file mode 100644
index 0000000..51968c8
--- /dev/null
+++ b/tests/testthat/test-operators.R
@@ -0,0 +1,24 @@
+context("operators")
+
+test_that("%|% returns default value", {
+  lgl <- c(TRUE, TRUE, NA, FALSE) %|% FALSE
+  expect_identical(lgl, c(TRUE, TRUE, FALSE, FALSE))
+
+  int <- c(1L, 2L, NA, 4L) %|% 3L
+  expect_identical(int, 1:4)
+
+  dbl <- c(1, 2, NA, 4) %|% 3
+  expect_identical(dbl, as.double(1:4))
+
+  chr <- c("1", "2", NA, "4") %|% "3"
+  expect_identical(chr, as.character(1:4))
+
+  cpx <- c(1i, 2i, NA, 4i) %|% 3i
+  expect_equal(cpx, c(1i, 2i, 3i, 4i))
+})
+
+test_that("%|% fails with wrong types", {
+  expect_error(c(1L, NA) %|% 2)
+  expect_error(c(1, NA) %|% "")
+})
+
diff --git a/tests/testthat/test-parse.R b/tests/testthat/test-parse.R
new file mode 100644
index 0000000..8375fe3
--- /dev/null
+++ b/tests/testthat/test-parse.R
@@ -0,0 +1,19 @@
+context("parse")
+
+test_that("parse_quosure() etc return correct formulas", {
+  expect_identical(parse_quosure("foo(bar)", "base"), set_env(quo(foo(bar)), base_env()))
+  expect_identical(parse_quosures("foo(bar)\n mtcars", "base"), list(set_env(quo(foo(bar)), base_env()), set_env(quo(mtcars), base_env())))
+})
+
+test_that("parse() requires scalar character", {
+  expect_error(parse_expr(letters), "`x` must be a string or a R connection")
+})
+
+test_that("temporary connections are closed", {
+  path <- tempfile("file")
+  cat("1; 2; mtcars", file = path)
+  conn <- file(path)
+
+  parse_exprs(conn)
+  expect_error(summary(conn), "invalid connection")
+})
diff --git a/tests/testthat/test-quo-enquo.R b/tests/testthat/test-quo-enquo.R
new file mode 100644
index 0000000..5760133
--- /dev/null
+++ b/tests/testthat/test-quo-enquo.R
@@ -0,0 +1,36 @@
+context("quo-unquo")
+
+test_that("explicit promise makes a formula", {
+  capture <- function(x) enquo(x)
+  f1 <- capture(1 + 2 + 3)
+  f2 <- ~ 1 + 2 + 3
+
+  expect_equal(f1, f2)
+})
+
+test_that("explicit promise works only one level deep", {
+  f <- function(x) list(env = get_env(), f = g(x))
+  g <- function(y) enquo(y)
+  out <- f(1 + 2 + 3)
+  expected_f <- with_env(out$env, quo(x))
+
+  expect_identical(out$f, expected_f)
+})
+
+test_that("can capture optimised constants", {
+  arg <- function() {
+    quo("foobar")
+  }
+  arg_bytecode <- compiler::cmpfun(arg)
+
+  expect_identical(arg(), quo("foobar"))
+  expect_identical(arg_bytecode(), quo("foobar"))
+
+  dots <- function() {
+    quos("foo", "bar")
+  }
+  dots_bytecode <- compiler::cmpfun(dots)
+
+  expect_identical(dots(), quos("foo", "bar"))
+  expect_identical(dots_bytecode(), quos("foo", "bar"))
+})
diff --git a/tests/testthat/test-quosure.R b/tests/testthat/test-quosure.R
new file mode 100644
index 0000000..e9e74bd
--- /dev/null
+++ b/tests/testthat/test-quosure.R
@@ -0,0 +1,35 @@
+context("quosure")
+
+test_that("quosures are spliced", {
+  q <- quo(foo(!! quo(bar), !! quo(baz(!! quo(baz), 3))))
+  expect_identical(quo_text(q), "foo(bar, baz(baz, 3))")
+
+  q <- expr_interp(~foo::bar(!! function(x) ...))
+  expect_identical(quo_text(q), "foo::bar(function (x) \n...)")
+
+  q <- quo(!! quo(!! quo(foo(!! quo(!! quo(bar(!! quo(!! quo(!! quo(baz))))))))))
+  expect_identical(quo_text(q), "foo(bar(baz))")
+})
+
+test_that("formulas are not spliced", {
+  expect_identical(quo_text(quo(~foo(~bar))), "~foo(~bar)")
+})
+
+test_that("splicing does not affect original quosure", {
+  f <- ~foo(~bar)
+  quo_text(f)
+  expect_identical(f, ~foo(~bar))
+})
+
+test_that("as_quosure() doesn't convert functions", {
+  expect_identical(as_quosure(base::mean), set_env(quo(!! base::mean), empty_env()))
+})
+
+test_that("as_quosure() coerces formulas", {
+  expect_identical(as_quosure(~foo), quo(foo))
+})
+
+test_that("quo_expr() warns", {
+  expect_warning(regex = NA, quo_expr(quo(foo), warn = TRUE))
+  expect_warning(quo_expr(quo(list(!! quo(foo))), warn = TRUE), "inner quosure")
+})
diff --git a/tests/testthat/test-stack.R b/tests/testthat/test-stack.R
new file mode 100644
index 0000000..44a840c
--- /dev/null
+++ b/tests/testthat/test-stack.R
@@ -0,0 +1,320 @@
+context("evaluation frames") # ---------------------------------------
+
+# Beware some sys.x() take `n` and some take `which`
+test_that("ctxt_frame() caller agrees with sys.parent()", {
+  parent <- sys.parent(n = 1)
+  caller <- ctxt_frame()$caller
+  expect_equal(caller, parent)
+})
+
+test_that("ctxt_frame() expr agrees with sys.call()", {
+  n <- sys.nframe()
+  syscall <- sys.call(which = n)
+  expr <- ctxt_frame()$expr
+  expect_identical(expr, syscall)
+
+  frame <- identity(ctxt_frame())
+  expect_equal(frame$expr, quote(identity(ctxt_frame())))
+})
+
+test_that("ctxt_frame() env agrees with sys.frame()", {
+  n <- sys.nframe()
+  sysframe <- sys.frame(which = n)
+  env <- ctxt_frame()$env
+  expect_identical(env, sysframe)
+})
+
+test_that("context position is correct", {
+  pos1 <- identity(ctxt_frame()$pos)
+  pos2 <- identity(identity(ctxt_frame()$pos))
+
+  pos1 <- fixup_ctxt_depth(pos1)
+  expect_equal(pos1, 1)
+
+  pos2 <- fixup_ctxt_depth(pos2)
+  expect_equal(pos2, 2)
+})
+
+test_that("ctxt_frame(n_depth) returns global frame", {
+  n_depth <- ctxt_depth()
+  frame <- ctxt_frame(n_depth)
+  global <- global_frame()
+  expect_identical(frame, global)
+})
+
+test_that("call_depth() returns correct depth", {
+  depth1 <- identity(call_depth())
+  expect_equal(fixup_call_depth(depth1), 0)
+
+  f <- function() identity(call_depth())
+  g <- function() f()
+  depth2 <- f()
+  depth3 <- g()
+  expect_equal(fixup_call_depth(depth2), 1)
+  expect_equal(fixup_call_depth(depth3), 2)
+
+  expect_equal(fixup_call_depth(f()), 1)
+  expect_equal(fixup_call_depth(g()), 2)
+})
+
+test_that("call_frame()$env is the same as parent.frame()", {
+  f <- function(n) call_frame(n + 1)$env
+  f_base <- function(n) parent.frame(n)
+  env1 <- f(1)
+  env1_base <- f_base(1)
+  expect_identical(env1, env1_base)
+
+  g <- function(n) list(f(n), f_base(n))
+  envs <- g(1)
+  expect_identical(envs[[1]], envs[[2]])
+})
+
+test_that("call_frame()$expr gives expression of caller not previous ctxt", {
+  f <- function(x = 1) call_frame(x)$expr
+  expect_equal(f(), quote(f()))
+
+  g <- function() identity(f(2))
+  expect_equal(g(), quote(g()))
+})
+
+test_that("call_frame(n_depth) returns global frame", {
+  n_depth <- call_depth()
+  expect_identical(call_frame(n_depth), global_frame())
+})
+
+test_that("call_frame(n) throws at correct level", {
+  n <- call_depth()
+  expect_error(call_frame(n + 1), "not that many frames")
+})
+
+test_that("call frames are cleaned", {
+  ctxt_frame_messy <- eval(quote(call_frame(clean = FALSE)), new.env())
+  expect_identical(ctxt_frame_messy$fn, prim_eval)
+
+  ctxt_frame_clean <- eval(quote(call_frame(clean = TRUE)), new.env())
+  expect_identical(ctxt_frame_clean$fn, base::eval)
+})
+
+
+context("evaluation stacks") # ---------------------------------------
+
+test_that("ctxt_stack_callers() agrees with sys.parents()", {
+  parents <- sys.parents()
+  callers <- ctxt_stack_callers()
+  expect_equal(callers, rev(parents))
+})
+
+test_that("ctxt_stack_exprs() agrees with sys.call()", {
+  pos <- sys.nframe()
+  syscalls <- map(seq(pos, 1), sys.call)
+  exprs <- ctxt_stack_exprs()
+  expect_identical(exprs, syscalls)
+})
+
+test_that("ctxt_stack_envs() agrees with sys.frames()", {
+  sysframes <- sys.frames()
+  sysframes <- rev(as.list(sysframes))
+  envs <- ctxt_stack_envs()
+  expect_identical(envs, sysframes)
+})
+
+test_that("ctxt_stack_trail() returns a vector of size nframe", {
+  trail <- ctxt_stack_trail()
+  n <- sys.nframe()
+  expect_equal(length(trail), n)
+})
+
+test_that("ctxt_stack_fns() returns functions in correct order", {
+  f1 <- function(x) f2(x)
+  f2 <- function(x) ctxt_stack_fns()
+  expect_identical(f1()[1:2], list(f2, f1))
+})
+
+test_that("ctxt_stack_fns() handles intervening frames", {
+  fns <- ctxt_stack_fns()
+  intervened_fns <- identity(identity(ctxt_stack_fns()))
+  expect_identical(c(identity, identity, fns), intervened_fns)
+})
+
+test_that("ctxt_stack() handles intervening frames", {
+  stack <- ctxt_stack()
+  intervened_stack <- identity(ctxt_stack())[-1]
+  expect_identical(intervened_stack, stack)
+})
+
+
+test_that("call_stack() trail ignores irrelevant frames", {
+  f1 <- function(x) f2(x)
+  f2 <- function(x) f3()
+  f3 <- function(x) call_stack()
+
+  stack1 <- f1()
+  trail1 <- pluck_int(stack1, "pos")
+  expect_equal(fixup_call_trail(trail1), c(3, 2, 1))
+
+  stack2 <- identity(identity(f1()))
+  trail2 <- pluck_int(stack2, "pos")
+  expect_equal(fixup_call_trail(trail2), c(5, 4, 3))
+})
+
+test_that("ctxt_stack() exprs is in opposite order to sys calls", {
+  syscalls <- sys.calls()
+  stack <- ctxt_stack()
+  stack <- drop_last(stack) # global frame
+  exprs <- pluck(stack, "expr")
+  expect_equal(exprs[[length(exprs)]], syscalls[[1]])
+  expect_equal(exprs[[1]], syscalls[[length(syscalls)]])
+})
+
+test_that("ctxt_stack() and call_stack() agree", {
+  call_stack <- call_stack()
+  call_stack <- drop_last(call_stack) # global frame
+  positions <- map_int(call_stack, `[[`, "pos")
+
+  ctxt_stack <- ctxt_stack()
+  ctxt_stack <- drop_last(ctxt_stack) # global frame
+  ctxt_stack <- rev(ctxt_stack)[positions]
+
+  call_exprs <- map(call_stack, `[[`, "expr")
+  eval_exprs <- map(ctxt_stack, `[[`, "expr")
+  expect_identical(call_exprs, eval_exprs)
+
+  is_eval <- map_lgl(call_stack, function(frame) {
+    identical(frame$fn, base::eval)
+  })
+
+  call_envs <- map(call_stack[!is_eval], `[[`, "env")
+  eval_envs <- map(ctxt_stack[!is_eval], `[[`, "env")
+  expect_identical(call_envs, eval_envs)
+})
+
+test_that("ctxt_stack() subsets n frames", {
+  stack <- ctxt_stack()
+  stack_2 <- ctxt_stack(2)
+  expect_identical(stack_2, stack[1:2])
+
+  n <- ctxt_depth()
+  stack_n <- ctxt_stack(n)
+  expect_identical(stack_n, stack)
+
+  # Get correct eval depth within expect_error()
+  expect_error({ n <- ctxt_depth(); stop() })
+  expect_error(ctxt_stack(n + 1), "not that many frames")
+})
+
+test_that("call_stack() subsets n frames", {
+  stack <- call_stack()
+  stack_2 <- call_stack(2)
+  expect_identical(stack_2, stack[1:2])
+
+  n <- call_depth()
+  stack_n <- call_stack(n)
+  expect_identical(stack_n, stack)
+
+  # Get correct eval depth within expect_error()
+  expect_error({ n <- call_depth(); stop() })
+  expect_error(call_stack(n + 1), "not that many frames")
+})
+
+test_that("call stacks are cleaned", {
+  stack_messy <- eval(quote(call_stack(clean = FALSE)), new.env())[1:2]
+  expect_identical(stack_messy[[1]]$fn, prim_eval)
+  expect_identical(stack_messy[[2]]$fn, base::eval)
+
+  stack_clean <- eval(quote(call_stack(clean = TRUE)), new.env())
+  expect_identical(stack_clean[[1]]$fn, base::eval)
+})
+
+test_that("ctxt_stack() trims layers of calls", {
+  current_stack <- ctxt_stack()
+  expect_identical(identity(identity(ctxt_stack(trim = 1))), current_stack)
+
+  fn <- function(trim) identity(identity(ctxt_stack(trim = trim)))
+  stack <- identity(identity(fn(2)))
+  expect_identical(stack, current_stack)
+})
+
+
+context("frame utils") # ---------------------------------------------
+
+test_that("frame_position() returns correct position", {
+  fn <- function() {
+    env <- environment()
+    pos <- ctxt_frame()$pos
+    g(env, pos)
+  }
+  g <- function(env, fn_pos) {
+    pos <- frame_position(env)
+    expect_identical(pos, fn_pos)
+
+    buried_pos <- identity(identity(frame_position(env)))
+    expect_identical(buried_pos, pos)
+  }
+  fn()
+})
+
+test_that("frame_position_current() computes distance from a frame", {
+  fn <- function() {
+    g(environment())
+  }
+  g <- function(env) {
+    distance <- frame_position(env, from = "current")
+    frame <- ctxt_frame(distance)
+    expect_identical(frame$env, env)
+
+    buried_distance <- identity(frame_position(env, from = "current"))
+    expect_equal(distance, buried_distance)
+  }
+  fn()
+})
+
+test_that("evaluation stack is trimmed from layers of calls", {
+  stack <- ctxt_stack()
+  trimmed_stack <- identity(stack_trim(identity(ctxt_stack())))
+  expect_identical(stack, trimmed_stack)
+})
+
+test_that("can return from frame", {
+  fn <- function() {
+    val <- g()
+    paste(val, "to fn()")
+  }
+  g <- function(env) {
+    h(environment())
+    stop("g!\n")
+  }
+  h <- function(env) {
+    return_from(env, "returned from h()")
+    stop("h!\n")
+  }
+
+  expect_equal(fn(), "returned from h() to fn()")
+})
+
+test_that("can return to frame", {
+  fn <- function() {
+    val <- identity(g(environment()))
+    paste(val, "to fn()")
+  }
+  g <- function(env) {
+    h(env)
+    stop("g!\n")
+  }
+  h <- function(env) {
+    return_to(env, "returned from h()")
+    stop("h!\n")
+  }
+
+  expect_equal(fn(), "returned from h() to fn()")
+})
+
+test_that("detects frame environment", {
+  expect_true(identity(is_frame_env(ctxt_frame(2)$env)))
+})
+
+test_that("call is not modified in place", {
+  f <- function(...) g(...)
+  g <- function(...) call_stack()[1:2]
+  stack <- f(foo)
+  expect_equal(stack[[1]]$expr, quote(g(...)))
+})
diff --git a/tests/testthat/test-tidy-capture.R b/tests/testthat/test-tidy-capture.R
new file mode 100644
index 0000000..045dd08
--- /dev/null
+++ b/tests/testthat/test-tidy-capture.R
@@ -0,0 +1,208 @@
+context("tidy capture")
+
+test_that("explicit dots make a list of formulas", {
+  fs <- quos(x = 1 + 2, y = 2 + 3)
+  f1 <- as_quosure(~ 1 + 2)
+  f2 <- as_quosure(~ 2 + 3)
+
+  expect_identical(fs$x, f1)
+  expect_identical(fs$y, f2)
+})
+
+test_that("quos() produces correct formulas", {
+  fn <- function(x = a + b, ...) {
+    list(dots = quos(x = x, y = a + b, ...), env = environment())
+  }
+  out <- fn(z = a + b)
+
+  expect_identical(out$dots$x, set_env(quo(x), out$env))
+  expect_identical(out$dots$y, set_env(quo(a + b), out$env))
+  expect_identical(out$dots$z, quo(a + b))
+})
+
+test_that("dots are interpolated", {
+  fn <- function(...) {
+    baz <- "baz"
+    fn_var <- quo(baz)
+    g(..., toupper(!! fn_var))
+  }
+  g <- function(...) {
+    foo <- "foo"
+    g_var <- quo(foo)
+    h(toupper(!! g_var), ...)
+  }
+  h <- function(...) {
+    quos(...)
+  }
+
+  bar <- "bar"
+  var <- quo(bar)
+  dots <- fn(toupper(!!var))
+
+  expect_identical(map(dots, deparse), named_list("~toupper(~foo)", "~toupper(~bar)", "~toupper(~baz)"))
+  expect_identical(map(dots, eval_tidy), named_list("FOO", "BAR", "BAZ"))
+})
+
+test_that("dots capture is stack-consistent", {
+  fn <- function(...) {
+    g(quos(...))
+  }
+  g <- function(dots) {
+    h(dots, foo(bar))
+  }
+  h <- function(dots, ...) {
+    dots
+  }
+  expect_identical(fn(foo(baz)), quos_list(quo(foo(baz))))
+})
+
+test_that("splice is consistently recognised", {
+  expect_true(is_splice(quote(!!! list())))
+  expect_true(is_splice(quote(UQS(list()))))
+  expect_true(is_splice(quote(rlang::UQS(list()))))
+  expect_false(is_splice(quote(ns::UQS(list()))))
+})
+
+test_that("dots can be spliced in", {
+  fn <- function(...) {
+    var <- "var"
+    list(
+      out = g(!!! quos(...), bar(baz), !!! list(a = var, b = ~foo)),
+      env = get_env()
+    )
+  }
+  g <- function(...) {
+    quos(...)
+  }
+
+  out <- fn(foo(bar))
+  expected <- quos_list(
+    quo(foo(bar)),
+    set_env(quo(bar(baz)), out$env),
+    a = quo("var"),
+    b = set_env(quo(!! with_env(out$env, ~foo)), out$env)
+  )
+  expect_identical(out$out, expected)
+})
+
+test_that("spliced dots are wrapped in formulas", {
+  args <- alist(x = var, y = foo(bar))
+  expect_identical(quos(!!! args), quos_list(x = quo(var), y = quo(foo(bar))))
+})
+
+test_that("dot names are interpolated", {
+  var <- "baz"
+  expect_identical(quos(!!var := foo, !!toupper(var) := bar), quos_list(baz = quo(foo), BAZ = quo(bar)))
+  expect_identical(quos(!!var := foo, bar), quos_list(baz = quo(foo), quo(bar)))
+
+  var <- quote(baz)
+  expect_identical(quos(!!var := foo), quos_list(baz = quo(foo)))
+})
+
+test_that("corner cases are handled when interpolating dot names", {
+  var <- na_chr
+  expect_identical(names(quos(!!var := NULL)), na_chr)
+
+  var <- NULL
+  expect_error(quos(!!var := NULL), "must be a name or string")
+})
+
+test_that("definitions are interpolated", {
+  var1 <- "foo"
+  var2 <- "bar"
+  dots <- dots_definitions(def = foo(!!var1) := bar(!!var2))
+
+  pat <- list(lhs = quo(foo("foo")), rhs = quo(bar("bar")))
+  expect_identical(dots$defs$def, pat)
+})
+
+test_that("dots are forwarded to named arguments", {
+  outer <- function(...) inner(...)
+  inner <- function(...) fn(...)
+  fn <- function(x) enquo(x)
+
+  env <- child_env(get_env())
+  expect_identical(with_env(env, outer(foo(bar))), new_quosure(quote(foo(bar)), env))
+})
+
+test_that("pronouns are scoped throughout nested captures", {
+  outer <- function(data, ...) eval_tidy(quos(...)[[1]], data = data)
+  inner <- function(...) map(quos(...), eval_tidy)
+
+  data <- list(foo = "bar", baz = "baz")
+  baz <- "bazz"
+
+  expect_identical(outer(data, inner(foo, baz)), set_names(list("bar", "baz"), c("", "")))
+})
+
+test_that("Can supply := with LHS even if .named = TRUE", {
+  expect_warning(regexp = NA, expect_identical(
+    quos(!!"nm" := 2, .named = TRUE), quos_list(nm = as_quosure(quote(2), empty_env()))
+  ))
+  expect_warning(regexp = "name ignored", expect_identical(
+    quos(foobar = !!"nm" := 2, .named = TRUE), quos_list(nm = as_quosure(quote(2), empty_env()))
+  ))
+})
+
+test_that("RHS of tidy defs are unquoted", {
+  expect_identical(quos(foo := !!"bar"), quos_list(foo = as_quosure(quote("bar"), empty_env())))
+})
+
+test_that("can capture empty list of dots", {
+  fn <- function(...) quos(...)
+  expect_identical(fn(), quos_list())
+})
+
+test_that("quosures are spliced before serialisation", {
+  quosures <- quos(!! quo(foo(!! quo(bar))), .named = TRUE)
+  expect_identical(names(quosures), "foo(bar)")
+})
+
+test_that("missing arguments are captured", {
+  q <- quo()
+  expect_true(is_missing(f_rhs(q)))
+  expect_identical(f_env(q), empty_env())
+})
+
+test_that("empty quosures are forwarded", {
+  inner <- function(x) enquo(x)
+  outer <- function(x) inner(x)
+  expect_identical(outer(), quo())
+
+  inner <- function(x) enquo(x)
+  outer <- function(x) inner(!! enquo(x))
+  expect_identical(outer(), quo())
+})
+
+test_that("quos() captures missing arguments", {
+  expect_identical(quos(, , .ignore_empty = "none"), set_names(quos_list(quo(), quo()), c("", "")))
+})
+
+test_that("quos() ignores missing arguments", {
+  expect_identical(quos(, , "foo", ), quos_list(quo(), quo(), new_quosure("foo", empty_env())))
+  expect_identical(quos(, , "foo", , .ignore_empty = "all"), quos_list(new_quosure("foo", empty_env())))
+})
+
+test_that("quosured literals are forwarded as is", {
+  expect_identical(quo(!! quo(NULL)), new_quosure(NULL, empty_env()))
+  expect_identical(quos(!! quo(10L)), set_names(quos_list(new_quosure(10L, empty_env())), ""))
+})
+
+test_that("expr() returns missing argument", {
+  expect_true(is_missing(expr()))
+})
+
+test_that("expr() supports forwarded arguments", {
+  fn <- function(...) g(...)
+  g <- function(...) expr(...)
+  expect_identical(fn(foo), quote(foo))
+})
+
+test_that("can take forced promise with strict = FALSE", {
+  fn <- function(strict, x) {
+    force(x)
+    captureArg(x, strict = strict)
+  }
+  expect_error(fn(TRUE, letters), "already been evaluated")
+  expect_identical(fn(FALSE, letters), NULL)
+})
diff --git a/tests/testthat/test-tidy-eval.R b/tests/testthat/test-tidy-eval.R
new file mode 100644
index 0000000..2bc4d61
--- /dev/null
+++ b/tests/testthat/test-tidy-eval.R
@@ -0,0 +1,217 @@
+context("eval_tidy") # --------------------------------------------------
+
+test_that("accepts expressions", {
+  expect_identical(eval_tidy(10), 10)
+  expect_identical(eval_tidy(quote(letters)), letters)
+})
+
+test_that("eval_tidy uses formula's environment", {
+  x <- 10
+  f <- local({
+    y <- 100
+    quo(x + y)
+  })
+
+  expect_equal(eval_tidy(f), 110)
+})
+
+test_that("data must be a dictionary", {
+  expect_error(eval_tidy(NULL, list(x = 10, x = 11)), "Data source must be a dictionary")
+})
+
+test_that("looks first in `data`", {
+  x <- 10
+  data <- list(x = 100)
+  expect_equal(eval_tidy(quo(x), data), 100)
+})
+
+test_that("pronouns resolve ambiguity looks first in `data`", {
+  x <- 10
+  data <- list(x = 100)
+  expect_equal(eval_tidy(quo(.data$x), data), 100)
+  expect_equal(eval_tidy(quo(.env$x), data), 10)
+})
+
+test_that("pronouns complain about missing values", {
+  expect_error(eval_tidy(quo(.data$x), list()), "Object `x` not found in data")
+  expect_error(eval_tidy(quo(.data$x), data.frame()), "Column `x` not found in data")
+})
+
+test_that("eval_tidy does quasiquoting", {
+  x <- 10
+  expect_equal(eval_tidy(quo(UQ(quote(x)))), 10)
+})
+
+
+test_that("unquoted formulas look in their own env", {
+  f <- function() {
+    n <- 100
+    quo(n)
+  }
+
+  n <- 10
+  expect_equal(eval_tidy(quo(UQ(f()))), 100)
+})
+
+test_that("unquoted formulas can use data", {
+  f1 <- function() {
+    z <- 100
+    x <- 2
+    quo(x + z)
+  }
+  f2 <- function() {
+    z <- 100
+    quo(.data$x + .env$z)
+  }
+
+  z <- 10
+  expect_identical(eval_tidy(f2(), list(x = 1)), 101)
+  expect_identical(eval_tidy(quo(!! f1()), data = list(x = 1)), 101)
+  expect_identical(eval_tidy(quo(!! f2()), data = list(x = 1)), 101)
+})
+
+test_that("bare formulas are not evaluated", {
+  f <- local(~x)
+  expect_identical(eval_tidy(quo(!! f)), f)
+
+  f <- a ~ b
+  expect_identical(eval_tidy(quo(!! f)), f)
+})
+
+test_that("quosures are not evaluated if not forced", {
+  fn <- function(arg, force) {
+    if (force) arg else "bar"
+  }
+
+  f1 <- quo(fn(!! quo(stop("forced!")), force = FALSE))
+  f2 <- quo(fn(!! local(quo(stop("forced!"))), force = FALSE))
+  expect_identical(eval_tidy(f1), "bar")
+  expect_identical(eval_tidy(f2), "bar")
+
+  f_forced1 <- quo(fn(!! quo(stop("forced!")), force = TRUE))
+  f_forced2 <- quo(fn(!! local(quo(stop("forced!"))), force = TRUE))
+  expect_error(eval_tidy(f_forced1), "forced!")
+  expect_error(eval_tidy(f_forced2), "forced!")
+})
+
+test_that("can unquote captured arguments", {
+  var <- quo(cyl)
+  fn <- function(arg) eval_tidy(enquo(arg), mtcars)
+  expect_identical(fn(var), quo(cyl))
+  expect_identical(fn(!!var), mtcars$cyl)
+})
+
+test_that("quosures are evaluated recursively", {
+  foo <- "bar"
+  expect_identical(eval_tidy(quo(foo)), "bar")
+  expect_identical(eval_tidy(quo(!!quo(!! quo(foo)))), "bar")
+})
+
+test_that("quosures have lazy semantics", {
+  fn <- function(arg) "unforced"
+  expect_identical(eval_tidy(quo(fn(~stop()))), "unforced")
+})
+
+test_that("can unquote hygienically within captured arg", {
+  fn <- function(df, arg) eval_tidy(enquo(arg), df)
+
+  foo <- "bar"; var <- quo(foo)
+  expect_identical(fn(mtcars, list(var, !!var)), list(quo(foo), "bar"))
+
+  var <- quo(cyl)
+  expect_identical(fn(mtcars, (!!var) > 4), mtcars$cyl > 4)
+  expect_identical(fn(mtcars, list(var, !!var)), list(quo(cyl), mtcars$cyl))
+  expect_equal(fn(mtcars, list(~var, !!var)), list(~var, mtcars$cyl))
+  expect_equal(fn(mtcars, list(~~var, !!quo(var), !!quo(quo(var)))), list(~~var, quo(cyl), quo(var)))
+})
+
+test_that("can unquote for old-style NSE functions", {
+  var <- quo(foo)
+  fn <- function(x) substitute(x)
+  expect_identical(quo(fn(!!f_rhs(var))), quo(fn(foo)))
+  expect_identical(eval_tidy(quo(fn(!!f_rhs(var)))), quote(foo))
+})
+
+test_that("all quosures in the call are evaluated", {
+  foobar <- function(x) paste0("foo", x)
+  x <- new_quosure(call("foobar", local({ bar <- "bar"; quo(bar) })))
+  f <- new_quosure(call("identity", x))
+  expect_identical(eval_tidy(f), "foobar")
+})
+
+test_that("two-sided formulas are not treated as quosures", {
+  expect_identical(eval_tidy(new_quosure(a ~ b)), a ~ b)
+})
+
+test_that("formulas are evaluated in evaluation environment", {
+  f <- eval_tidy(quo(foo ~ bar), list(foo = "bar"))
+  expect_false(identical(f_env(f), get_env()))
+})
+
+test_that("evaluation env is cleaned up", {
+  f <- local(quo(function() list(f = ~letters, env = environment())))
+  fn <- eval_tidy(f)
+  out <- fn()
+  expect_identical(out$f, with_env(env = out$env, ~letters))
+})
+
+test_that("inner formulas are rechained to evaluation env", {
+  env <- child_env(NULL)
+  f1 <- quo(env$eval_env1 <- get_env())
+  f2 <- quo({
+    !! f1
+    env$eval_env2 <- get_env()
+  })
+
+  eval_tidy(f2, mtcars)
+  expect_identical(env$eval_env1, env$eval_env2)
+  expect_true(env_inherits(env$eval_env2, get_env(f2)))
+})
+
+test_that("dyn scope is chained to lexical env", {
+  foo <- "bar"
+  overscope <- child_env(NULL)
+  expect_identical(eval_tidy_(quo(foo), overscope), "bar")
+})
+
+test_that("whole scope is purged", {
+  outside <- child_env(NULL, important = TRUE)
+  top <- child_env(outside, foo = "bar", hunoz = 1)
+  mid <- child_env(top, bar = "baz", hunoz = 2)
+  bottom <- child_env(mid, !!! list(.top_env = top, .env = 1, `~` = 2))
+
+  overscope_clean(bottom)
+
+  expect_identical(names(bottom), character(0))
+  expect_identical(names(mid), character(0))
+  expect_identical(names(top), character(0))
+  expect_identical(names(outside), "important")
+})
+
+test_that("empty quosure self-evaluates", {
+  quo <- quo(is_missing(!! quo()))
+  expect_true(eval_tidy(quo))
+})
+
+test_that("cannot replace elements of pronouns", {
+  expect_error(eval_tidy(quo(.data$foo <- "bar")), "read-only dictionary")
+})
+
+test_that("formulas are not evaluated as quosures", {
+  expect_identical(eval_tidy(~letters), ~letters)
+})
+
+test_that("can supply environment as data", {
+  `_x` <- "foo"
+  expect_identical(eval_tidy(quo(`_x`), environment()), "foo")
+  expect_error(eval_tidy(quo(`_y`), environment()), "not found")
+})
+
+test_that("tilde calls are evaluated in overscope", {
+  quo <- quo({
+    foo <- "foo"
+    ~foo
+  })
+  f <- eval_tidy(quo)
+  expect_true(env_has(f, "foo"))
+})
diff --git a/tests/testthat/test-tidy-unquote.R b/tests/testthat/test-tidy-unquote.R
new file mode 100644
index 0000000..11883cb
--- /dev/null
+++ b/tests/testthat/test-tidy-unquote.R
@@ -0,0 +1,180 @@
+context("unquote")
+
+test_that("interpolation does not recurse over spliced arguments", {
+  var1 <- quote(!! stop())
+  var2 <- quote({foo; !! stop(); bar})
+  expect_error(quo(list(!!! var1)), NA)
+  expect_error(expr(list(!!! var2)), NA)
+})
+
+test_that("formulas containing unquote operators are interpolated", {
+  var1 <- quo(foo)
+  var2 <- local({ foo <- "baz"; quo(foo) })
+
+  f <- expr_interp(~list(!!var1, !!var2))
+  expect_identical(f, new_quosure(lang("list", as_quosure(var1), as_quosure(var2))))
+})
+
+test_that("interpolation is carried out in the right environment", {
+  f <- local({ foo <- "foo"; ~!!foo })
+  expect_identical(expr_interp(f), new_quosure("foo", env = f_env(f)))
+})
+
+test_that("interpolation now revisits unquoted formulas", {
+  f <- ~list(!!~!!stop("should not interpolate within formulas"))
+  f <- expr_interp(f)
+  # This used to be idempotent:
+  expect_error(expect_false(identical(expr_interp(f), f)), "interpolate within formulas")
+})
+
+test_that("formulas are not treated as quosures", {
+  expect_identical(expr(a ~ b), quote(a ~ b))
+  expect_identical(expr(~b), quote(~b))
+  expect_identical(expr(!!~b), ~b)
+})
+
+test_that("unquote operators are always in scope", {
+  env <- child_env("base", foo = "bar")
+  f <- with_env(env, ~UQ(foo))
+  expect_identical(expr_interp(f), new_quosure("bar", env))
+})
+
+test_that("can interpolate in specific env", {
+  foo <- "bar"
+  env <- child_env(NULL, foo = "foo")
+  expect_identical(expr_interp(~UQ(foo)), set_env(quo("bar")))
+  expect_identical(expr_interp(~UQ(foo), env), set_env(quo("foo")))
+})
+
+test_that("can qualify operators with namespace", {
+  # Should remove prefix only if rlang-qualified:
+  expect_identical(quo(rlang::UQ(toupper("a"))), new_quosure("A", empty_env()))
+  expect_identical(quo(list(rlang::UQS(list(a = 1, b = 2)))), quo(list(a = 1, b = 2)))
+
+  # Should keep prefix otherwise:
+  expect_identical(quo(other::UQ(toupper("a"))), quo(other::"A"))
+  expect_identical(quo(x$UQ(toupper("a"))), quo(x$"A"))
+})
+
+test_that("unquoting is frame-consistent", {
+  defun <- quote(!! function() NULL)
+  env <- child_env("base")
+  expect_identical(fn_env(expr_interp(defun, env)), env)
+})
+
+test_that("unquoted quosure has S3 class", {
+  quo <- quo(!! ~quo)
+  expect_is(quo, "quosure")
+})
+
+test_that("unquoted quosures are not guarded", {
+  quo <- eval_tidy(quo(quo(!! ~quo)))
+  expect_true(is_quosure(quo))
+})
+
+
+# UQ ----------------------------------------------------------------------
+
+test_that("evaluates contents of UQ()", {
+  expect_equal(quo(UQ(1 + 2)), ~ 3)
+})
+
+test_that("quosures are not rewrapped", {
+  var <- quo(!! quo(letters))
+  expect_identical(quo(!!var), quo(letters))
+
+  var <- new_quosure(local(~letters), env = child_env(get_env()))
+  expect_identical(quo(!!var), var)
+})
+
+test_that("UQ() fails if called without argument", {
+  expect_equal(quo(UQ(NULL)), ~NULL)
+  expect_equal(quo(rlang::UQ(NULL)), ~NULL)
+  expect_error(quo(UQ()), "must be called with an argument")
+  expect_error(quo(rlang::UQ()), "must be called with an argument")
+})
+
+
+# UQS ---------------------------------------------------------------------
+
+test_that("contents of UQS() must be a vector or language object", {
+  expect_error(quo(1 + UQS(environment())), "`x` must be a vector")
+})
+
+test_that("values of UQS() spliced into expression", {
+  f <- quo(f(a, UQS(list(quote(b), quote(c))), d))
+  expect_identical(f, quo(f(a, b, c, d)))
+})
+
+test_that("names within UQS() are preseved", {
+  f <- quo(f(UQS(list(a = quote(b)))))
+  expect_identical(f, quo(f(a = b)))
+})
+
+test_that("UQS() handles language objects", {
+  expect_identical(quo(list(UQS(quote(foo)))), quo(list(foo)))
+  expect_identical(quo(list(UQS(quote({ foo })))), quo(list(foo)))
+})
+
+test_that("splicing an empty vector works", {
+  expect_identical(expr_interp(~list(!!! list())), quo(list()))
+  expect_identical(expr_interp(~list(!!! character(0))), quo(list()))
+  expect_identical(expr_interp(~list(!!! NULL)), quo(list()))
+})
+
+
+# UQE ----------------------------------------------------------------
+
+test_that("UQE() extracts right-hand side", {
+  var <- ~cyl
+  expect_identical(quo(mtcars$UQE(var)), quo(mtcars$cyl))
+  expect_identical(quo(mtcars$`!!`(var)), quo(mtcars$cyl))
+})
+
+
+# bang ---------------------------------------------------------------
+
+test_that("single ! is not treated as shortcut", {
+  expect_identical(quo(!foo), as_quosure(~!foo))
+})
+
+test_that("double and triple ! are treated as syntactic shortcuts", {
+  var <- local(quo(foo))
+  expect_identical(quo(!! var), as_quosure(var))
+  expect_identical(quo(!! quo(foo)), quo(foo))
+  expect_identical(quo(list(!!! letters[1:3])), quo(list("a", "b", "c")))
+})
+
+test_that("`!!` works in prefixed calls", {
+  var <- ~cyl
+  expect_identical(expr_interp(~mtcars$`!!`(var)), quo(mtcars$cyl))
+  expect_identical(expr_interp(~foo$`!!`(quote(bar))), quo(foo$bar))
+  expect_identical(expr_interp(~base::`!!`(~list)()), quo(base::list()))
+})
+
+
+# quosures -----------------------------------------------------------
+
+test_that("quosures are created for all informative formulas", {
+  foo <- local(quo(foo))
+  bar <- local(quo(bar))
+
+  interpolated <- local(quo(list(!!foo, !!bar)))
+  expected <- new_quosure(lang("list", as_quosure(foo), as_quosure(bar)), env = get_env(interpolated))
+  expect_identical(interpolated, expected)
+
+  interpolated <- quo(!!interpolated)
+  expect_identical(interpolated, expected)
+})
+
+
+# dots_values() ------------------------------------------------------
+
+test_that("can unquote-splice symbols", {
+  expect_identical(ll(!!! list(quote(`_symbol`))), list(quote(`_symbol`)))
+})
+
+test_that("can unquote symbols", {
+  expect_identical(dots_values(!! quote(.)), named_list(quote(.)))
+  expect_identical(dots_values(rlang::UQ(quote(.))), named_list(quote(.)))
+})
diff --git a/tests/testthat/test-types-coercion.R b/tests/testthat/test-types-coercion.R
new file mode 100644
index 0000000..a802b2e
--- /dev/null
+++ b/tests/testthat/test-types-coercion.R
@@ -0,0 +1,50 @@
+context("types-coercion")
+
+test_that("no method dispatch", {
+  as.logical.foo <- function(x) "wrong"
+  expect_identical(as_integer(structure(TRUE, class = "foo")), 1L)
+
+  as.list.foo <- function(x) "wrong"
+  expect_identical(as_list(structure(1:10, class = "foo")), as.list(1:10))
+})
+
+test_that("input is left intact", {
+  x <- structure(TRUE, class = "foo")
+  y <- as_integer(x)
+  expect_identical(x, structure(TRUE, class = "foo"))
+})
+
+test_that("as_list() zaps attributes", {
+  expect_identical(as_list(structure(list(), class = "foo")), list())
+})
+
+test_that("as_list() only coerces vector or dictionary types", {
+  expect_identical(as_list(1:3), list(1L, 2L, 3L))
+  expect_error(as_list(quote(symbol)), "a symbol to a list")
+})
+
+test_that("as_list() bypasses environment method and leaves input intact", {
+  as.list.foo <- function(x) "wrong"
+  x <- structure(child_env(NULL), class = "foo")
+  y <- as_list(x)
+
+  expect_is(x, "foo")
+  expect_identical(y, set_names(list(), character(0)))
+})
+
+test_that("as_integer() and as_logical() require integerish input", {
+  expect_error(as_integer(1.5), "a fractional double vector to an integer vector")
+  expect_error(as_logical(1.5), "a fractional double vector to a logical vector")
+})
+
+test_that("names are preserved", {
+  nms <- as.character(1:3)
+  x <- set_names(1:3, nms)
+  expect_identical(names(as_double(x)), nms)
+  expect_identical(names(as_list(x)), nms)
+})
+
+test_that("can convert strings (#138)", {
+  expect_identical(as_character("a"), "a")
+  expect_identical(as_list("a"), list("a"))
+})
diff --git a/tests/testthat/test-types.R b/tests/testthat/test-types.R
new file mode 100644
index 0000000..6724b70
--- /dev/null
+++ b/tests/testthat/test-types.R
@@ -0,0 +1,59 @@
+context("types")
+
+test_that("predicates match definitions", {
+  expect_true(is_character(letters, 26))
+  expect_false(is_character(letters, 1))
+  expect_false(is_list(letters, 26))
+
+  expect_true(is_list(mtcars, 11))
+  expect_false(is_list(mtcars, 0))
+  expect_false(is_double(mtcars, 11))
+})
+
+test_that("can bypass string serialisation", {
+  bar <- chr(list("cafe", string(c(0x63, 0x61, 0x66, 0xE9))), .encoding = "latin1")
+  bytes <- list(bytes(c(0x63, 0x61, 0x66, 0x65)), bytes(c(0x63, 0x61, 0x66, 0xE9)))
+  expect_identical(map(bar, as_bytes), bytes)
+  expect_identical(str_encoding(bar[[2]]), "latin1")
+})
+
+test_that("pattern match on string encoding", {
+  expect_true(is_character(letters, encoding = "unknown"))
+  expect_false(is_character(letters, encoding = "UTF-8"))
+
+  chr <- chr(c("foo", "fo\uE9"))
+  expect_false(is_character(chr, encoding = "UTF-8"))
+  expect_false(is_character(chr, encoding = "unknown"))
+  expect_true(is_character(chr, encoding = c("unknown", "UTF-8")))
+})
+
+test_that("type_of() returns correct type", {
+  expect_identical(type_of("foo"), "string")
+  expect_identical(type_of(letters), "character")
+  expect_identical(type_of(base::`$`), "primitive")
+  expect_identical(type_of(base::list), "primitive")
+  expect_identical(type_of(base::eval), "closure")
+  expect_identical(type_of(~foo), "formula")
+  expect_identical(type_of(quo(foo)), "formula")
+  expect_identical(type_of(a := b), "definition")
+  expect_identical(type_of(quote(foo())), "language")
+})
+
+test_that("lang_type_of() returns correct lang subtype", {
+  expect_identical(lang_type_of(quote(foo())), "named")
+  expect_identical(lang_type_of(quote(foo::bar())), "namespaced")
+  expect_identical(lang_type_of(quote(foo@bar())), "recursive")
+
+  lang <- quote(foo())
+  mut_node_car(lang, 10)
+  expect_error(lang_type_of(lang), "corrupt")
+
+  mut_node_car(lang, base::list)
+  expect_identical(lang_type_of(lang), "inlined")
+})
+
+test_that("types are friendly", {
+  expect_identical(friendly_type("character"), "a character vector")
+  expect_identical(friendly_type("integer"), "an integer vector")
+  expect_identical(friendly_type("language"), "a call")
+})
diff --git a/tests/testthat/test-utils.R b/tests/testthat/test-utils.R
new file mode 100644
index 0000000..7da3a53
--- /dev/null
+++ b/tests/testthat/test-utils.R
@@ -0,0 +1,12 @@
+context("utils")
+
+test_that("locale setters report old locale", {
+  tryCatch(
+    old <- suppressMessages(mut_mbcs_locale()),
+    warning = function(e) skip("Cannot set MBCS locale")
+  )
+
+  mbcs <- suppressMessages(mut_latin1_locale())
+  suppressMessages(Sys.setlocale("LC_CTYPE", old))
+  expect_true(tolower(mbcs) %in% tolower(c("ja_JP.SJIS", "English_United States.932")))
+})
diff --git a/tests/testthat/test-vector.R b/tests/testthat/test-vector.R
new file mode 100644
index 0000000..7036b06
--- /dev/null
+++ b/tests/testthat/test-vector.R
@@ -0,0 +1,187 @@
+context("vector")
+
+test_that("vector is modified", {
+  x <- c(1, b = 2, c = 3, 4)
+  out <- modify(x, 5, b = 20, splice(list(6, c = "30")))
+  expect_equal(out, list(1, b = 20, c = "30", 4, 5, 6))
+})
+
+test_that("are_na() requires vector input but not is_na()", {
+  expect_error(are_na(base::eval), "must be a vector")
+  expect_false(is_na(base::eval))
+})
+
+test_that("atomic vectors are spliced", {
+  lgl <- lgl(TRUE, c(TRUE, FALSE), list(FALSE, FALSE))
+  expect_identical(lgl, c(TRUE, TRUE, FALSE, FALSE, FALSE))
+
+  int <- int(1L, c(2L, 3L), list(4L, 5L))
+  expect_identical(int, 1:5)
+
+  dbl <- dbl(1, c(2, 3), list(4, 5))
+  expect_identical(dbl, as_double(1:5))
+
+  cpl <- cpl(1i, c(2i, 3i), list(4i, 5i))
+  expect_identical(cpl, c(1i, 2i, 3i, 4i, 5i))
+
+  chr <- chr("foo", c("foo", "bar"), list("buz", "baz"))
+  expect_identical(chr, c("foo", "foo", "bar", "buz", "baz"))
+
+  raw <- bytes(1, c(2, 3), list(4, 5))
+  expect_identical(raw, bytes(1:5))
+})
+
+test_that("can create empty vectors", {
+  expect_identical(lgl(), logical(0))
+  expect_identical(int(), integer(0))
+  expect_identical(dbl(), double(0))
+  expect_identical(cpl(), complex(0))
+  expect_identical(chr(), character(0))
+  expect_identical(bytes(), raw(0))
+  expect_identical(ll(), list())
+})
+
+test_that("objects are not spliced", {
+  expect_error(lgl(structure(list(TRUE, TRUE), class = "bam")), "Can't splice S3 objects")
+})
+
+test_that("explicitly spliced lists are spliced", {
+  expect_identical(lgl(FALSE, structure(list(TRUE, TRUE), class = "spliced")), c(FALSE, TRUE, TRUE))
+})
+
+test_that("splicing uses inner names", {
+  expect_identical(lgl(c(a = TRUE, b = FALSE)), c(a = TRUE, b = FALSE))
+  expect_identical(lgl(list(c(a = TRUE, b = FALSE))), c(a = TRUE, b = FALSE))
+})
+
+test_that("splicing uses outer names when scalar", {
+  expect_identical(lgl(a = TRUE, b = FALSE), c(a = TRUE, b = FALSE))
+  expect_identical(lgl(list(a = TRUE, b = FALSE)), c(a = TRUE, b = FALSE))
+})
+
+test_that("warn when outer names unless input is unnamed scalar atomic", {
+  expect_warning(expect_identical(dbl(a = c(1, 2)), c(1, 2)), "Outer names")
+  expect_warning(expect_identical(dbl(list(a = c(1, 2))), c(1, 2)), "Outer names")
+  expect_warning(expect_identical(dbl(a = c(A = 1)), c(A = 1)), "Outer names")
+  expect_warning(expect_identical(dbl(list(a = c(A = 1))), c(A = 1)), "Outer names")
+})
+
+test_that("warn when spliced lists have outer name", {
+  expect_warning(lgl(list(c = c(cc = FALSE))), "Outer names")
+})
+
+test_that("ll() doesn't splice bare lists", {
+  expect_identical(ll(list(1, 2)), list(list(1, 2)))
+  expect_identical(ll(splice(list(1, 2))), list(1, 2))
+})
+
+test_that("atomic inputs are implicitly coerced", {
+  expect_identical(lgl(10L, FALSE, list(TRUE, 0L, 0)), c(TRUE, FALSE, TRUE, FALSE, FALSE))
+  expect_identical(dbl(10L, 10, TRUE, list(10L, 0, TRUE)), c(10, 10, 1, 10, 0, 1))
+
+  expect_error(lgl("foo"), "Can't convert a string to a logical vector")
+  expect_error(chr(10), "Can't convert a double vector to a character vector")
+})
+
+test_that("type errors are handled", {
+  expect_error(lgl(get_env()), "Can't convert an environment to a logical vector")
+  expect_error(lgl(list(get_env())), "Can't convert an environment to a logical vector")
+})
+
+test_that("empty inputs are spliced", {
+  expect_identical(lgl(NULL, lgl(), list(NULL, lgl())), lgl())
+  expect_warning(regexp = NA, expect_identical(lgl(a = NULL, a = lgl(), list(a = NULL, a = lgl())), lgl()))
+})
+
+test_that("ll() splices names", {
+  expect_identical(ll(a = TRUE, b = FALSE), list(a = TRUE, b = FALSE))
+  expect_identical(ll(c(A = TRUE), c(B = FALSE)), list(c(A = TRUE), c(B = FALSE)))
+  expect_identical(ll(a = c(A = TRUE), b = c(B = FALSE)), list(a = c(A = TRUE), b = c(B = FALSE)))
+})
+
+
+# Squashing ----------------------------------------------------------
+
+test_that("vectors and names are squashed", {
+  expect_identical(
+    squash_dbl(list(a = 1e0, list(c(b = 2e1, c = 3e1), d = 4e1, list(5e2, list(e = 6e3, c(f = 7e3)))), 8e0)),
+    c(a = 1e0, b = 2e1, c = 3e1, d = 4e1, 5e2, e = 6e3, f = 7e3, 8e0)
+  )
+})
+
+test_that("bad outer names warn even at depth", {
+  expect_warning(regex = "Outer names",
+    expect_identical(squash_dbl(list(list(list(A = c(a = 1))))), c(a = 1))
+  )
+})
+
+test_that("lists are squashed", {
+  expect_identical(squash(list(a = 1e0, list(c(b = 2e1, c = 3e1), d = 4e1, list(5e2, list(e = 6e3, c(f = 7e3)))), 8e0)), list(a = 1, c(b = 20, c = 30), d = 40, 500, e = 6000, c(f = 7000), 8))
+})
+
+test_that("squash_if() handles custom predicate", {
+  is_foo <- function(x) inherits(x, "foo") || is_bare_list(x)
+  foo <- set_attrs(list("bar"), class = "foo")
+  x <- list(1, list(foo, list(foo, 100)))
+  expect_identical(squash_if(x, is_foo), list(1, "bar", "bar", 100))
+})
+
+
+# Flattening ---------------------------------------------------------
+
+test_that("vectors and names are flattened", {
+  expect_identical(flatten_dbl(list(a = 1, c(b = 2), 3)), c(a = 1, b = 2, 3))
+  expect_identical(flatten_dbl(list(list(a = 1), list(c(b = 2)), 3)), c(a = 1, b = 2, 3))
+  expect_error(flatten_dbl(list(1, list(list(2)), 3)), "Can't convert")
+})
+
+test_that("bad outer names warn when flattening", {
+  expect_warning(expect_identical(flatten_dbl(list(a = c(A = 1))), c(A = 1)), "Outer names")
+  expect_warning(expect_identical(flatten_dbl(list(a = 1, list(b = c(B = 2)))), c(a = 1, B = 2)), "Outer names")
+})
+
+test_that("lists are flattened", {
+  x <- list(1, list(2, list(3, list(4))))
+  expect_identical(flatten(x), list(1, 2, list(3, list(4))))
+  expect_identical(flatten(flatten(x)), list(1, 2, 3, list(4)))
+  expect_identical(flatten(flatten(flatten(x))), list(1, 2, 3, 4))
+  expect_identical(flatten(flatten(flatten(flatten(x)))), list(1, 2, 3, 4))
+})
+
+test_that("flatten_if() handles custom predicate", {
+  obj <- set_attrs(list(1:2), class = "foo")
+  x <- list(obj, splice(obj), unclass(obj))
+
+  expect_identical(flatten_if(x), list(obj, obj[[1]], unclass(obj)))
+  expect_identical(flatten_if(x, is_bare_list), list(obj, splice(obj), obj[[1]]))
+
+  pred <- function(x) is_bare_list(x) || is_spliced(x)
+  expect_identical(flatten_if(x, pred), list(obj, obj[[1]], obj[[1]]))
+})
+
+test_that("flatten_if() handles external pointers", {
+  obj <- set_attrs(list(1:2), class = "foo")
+  x <- list(obj, splice(obj), unclass(obj))
+
+  expect_identical(flatten_if(x, test_is_spliceable), list(obj[[1]], splice(obj), unclass(obj)))
+
+  ptr <- test_is_spliceable[[1]]
+  expect_identical(flatten_if(x, ptr), list(obj[[1]], splice(obj), unclass(obj)))
+
+  expect_is(test_is_spliceable, "fn_pointer")
+})
+
+test_that("flatten() splices names", {
+  expect_warning(regexp = "Outer names",
+    expect_identical(
+      flatten(list(a = list(A = TRUE), b = list(B = FALSE))) ,
+      list(A = TRUE, B = FALSE)
+    )
+  )
+  expect_warning(regexp = "Outer names",
+    expect_identical(
+      flatten(list(a = list(TRUE), b = list(FALSE))) ,
+      list(TRUE, FALSE)
+    )
+  )
+})
diff --git a/vignettes/releases/rlang-0.1.Rmd b/vignettes/releases/rlang-0.1.Rmd
new file mode 100644
index 0000000..b59ccdf
--- /dev/null
+++ b/vignettes/releases/rlang-0.1.Rmd
@@ -0,0 +1,202 @@
+---
+title: "rlang 0.1"
+---
+
+```{r setup, include = FALSE}
+library("rlang")
+knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
+```
+
+It is with great pleasure that we announce the first release of rlang.
+This package provides tools for working with core language features of
+R and the tidyverse. You can install it by running:
+
+```{r, eval = FALSE}
+install.packages("rlang")
+```
+
+(rlang is not currently installed with the tidyverse package, but it will be in the near future.)
+
+rlang includes a large number of tools, and we'll be working to describe and document them clearly in the future. In this blog post, we'll introduce the "tidy evaluation" framework, and discuss some of the design principles that underlie rlang. You can learn more at <http://rlang.tidyverse.org>.
+
+## Tidy evaluation
+
+Tidy evaluation, or __tidyeval__ for short, is a new approach to non-standard
+evaluation (NSE) that will be implemented in all tidyverse grammars, including
+dplyr, tidyr, and ggplot2. 
+
+Tidyeval is built on top of three key tools:
+
+*   __Quosures__, a data structure that captures both an expression and its
+    environment. Quosures are a subtype of formulas that have special
+    support in tidyeval grammars. You create quosures with `quo()`, 
+    `enquo()` and `quos()`. 
+
+*   __[Quasiquotation][quasiq]__, a tool that lets you "unquote", or 
+    evaluate, values in the middle of expressions that are otherwise quoted.
+
+*   Tools for evaluating expressions containing quosures: `eval_tidy()`
+    and `as_overscope()`. This is what you will need to create your own
+    grammars.
+    
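+Concretely, the three tools compose: a quosure can be unquoted into a
+larger quoted expression, and `eval_tidy()` then resolves each symbol
+in the right place. A minimal sketch:
+
+```{r}
+var <- quo(cyl)
+f <- quo((!! var) > 4)  # unquote `var` into a larger expression
+
+# `cyl` is looked up in the data, everything else as usual:
+eval_tidy(f, data = mtcars)[1:5]
+```
+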
+The complete system is too much to describe in a blog post, so there are two places to learn more:
+
+* To learn how tidyeval will help you program with data analysis grammars,
+  read [programming with dplyr][tidyeval-dplyr], a vignette that will be
+  included in the upcoming dplyr release.
+  
+* To learn more about the theory behind tidyeval, read the 
+  [tidy evaluation][tidyeval-rlang] vignette included in rlang.
+
+## Features and principles in rlang
+
+Many rlang functions overlap with base R functions: the goal of rlang, like many tidyverse packages, is not to allow you to do fundamentally new things, but to do things with greater ease. One way that rlang makes your life easier is by adopting a consistent set of principles that thread throughout the package.
+
+We describe four important principles below:
+
+* Splicing and unquoting syntax.
+* Pattern-matching predicates.
+* Naming conventions.
+* Comprehensive documentation.
+
+### Splicing and unquoting syntax
+
+All rlang functions taking `...` support a special syntax for splicing
+and unquoting. For example, take the `lang()` function which creates 
+unevaluated function calls (it's similar to `base::call()`). The first 
+argument is the name of the function to call, and the subsequent
+arguments are the arguments to that function:
+
+```{r}
+lang("foo", x = 1, y = "a", z = TRUE)
+```
+
+What happens if you already have the arguments in a list?
+
+```{r}
+args <- list(x = 1, y = "a", z = TRUE)
+lang("foo", args)
+```
+
+You can use the unquote-splice operator, `!!!`, to splice the contents of the list in:
+
+```{r}
+lang("foo", splice(args))
+```
+
+To use this in your own code, call `dots_list()`:
+
+```{r}
+capture_dots <- function(...) {
+  dots_list(...)
+} 
+
+str(capture_dots(a = 1, b = 2, c = 3))
+str(capture_dots(!!! list(a = 1, b = 2), c = 3))
+```
+
+Using `dots_list()` means that you don't need to provide an extra argument that takes an explicit list, or rely on your users knowing how to use `do.call()` correctly.
+
+### Pattern-matching predicates
+
+rlang provides an extensive set of predicate functions like `is_character()` and `is_list()` that make it easy to check that arguments are the type that you expect. 
+
+There are two main differences compared to base R equivalents. Firstly, they are less surprising: for example `is_vector(factor("a"))` returns `TRUE` and `is_atomic(NULL)` returns `FALSE`. Secondly, they have arguments that allow you to check other properties. For example, you can check that
+vectors have a given length:
+
+```{r}
+is_list(mtcars)
+is_list(mtcars, n = 10)
+is_list(mtcars, n = 11)
+```
+
+This is particularly useful for more complex types, like calls, where you can check the number of arguments (`n`), the function `name`, or its namespace (`ns`):
+
+```{r}
+call <- quote(base::foo(bar, baz))
+
+is_lang(call, n = 3)
+is_lang(call, n = 2)
+
+is_lang(call, name = "bar")
+is_lang(call, name = "foo")
+
+is_lang(call, ns = "rlang")
+is_lang(call, ns = "base")
+
+is_lang(call, "foo", n = 2, ns = "base")
+```
+
+### Consistent naming
+
+rlang uses strong naming conventions to make it easier to remember what a function does, to support autocomplete, and to hopefully make it easier to guess the name of a function.
+
+*   Prefixes and suffixes for input and output type:
+
+    rlang tries to follow the general rule that prefixes designate the
+    input type of a function while suffixes indicate the output type.
+    For instance, `env_bind()` takes an environment while `pkg_env()`
+    returns one.
+
+*   Side-effects of setter functions: If an rlang setter starts with `set_`, 
+    it means it doesn't have side effects; it returns a modified object. If 
+    it starts with `mut_`, it changes its input in place.
+
+*   Constructors. If a constructor takes dots, it is named after the output
+    type:
+
+    ```{r}
+    env(x = 1)
+    chr(x = "a")
+    lang("foo", x = NULL)
+    ```
+    
+    On the other hand, if it takes components as formed objects, it is
+    prefixed with `new_`:
+    
+    ```{r}
+    new_function(list(x = NULL), quote({ x }))
+    ```
+
+*   Scalar versus vectorised functions.
+
+    What's the difference between `has_name()` and `have_name()`? The
+    former is a scalar predicate while the latter is vectorised:
+    
+    ```{r}
+    has_name(mtcars, "cyl")
+    have_name(mtcars)
+    have_name(c(a = 1, 2))
+    ```
+    
+    For that reason, `is_na()` is different from the base R function
+    `is.na()`: it is a scalar predicate. On the other hand, `are_na()` is
+    a vector predicate.
+    
+    ```{r}
+    x <- c(1L, 2L, NA, 3L)
+    is_na(x)
+    are_na(x)
+    ```
+    
+    This consistency is a helpful hint to beginners as it's often hard to
+    know if a function is vectorised.
+
+### Comprehensive documentation
+
+rlang's documentation is intended to be didactic and introduce
+mid-level R programmers to deeper concepts and features of the
+language. For instance:
+
+- `?env` provides an introduction to scoping issues in R.
+
+- `?lang` and `?pairlist` explain the structure of R expressions.
+
+- `?cnd_signal`, `?with_handlers`, and `?exiting` go over the
+  condition system in R.
+
+Writing good documentation is hard, so expect these to get better over time.
+
+[tidyeval-dplyr]: http://dplyr.tidyverse.org/articles/programming.html
+[tidyeval-rlang]: http://rlang.tidyverse.org/articles/tidy-evaluation.html
+[quasiq]: http://rlang.tidyverse.org/reference/quasiquotation.html
diff --git a/vignettes/tidy-evaluation.Rmd b/vignettes/tidy-evaluation.Rmd
new file mode 100644
index 0000000..0bb86fc
--- /dev/null
+++ b/vignettes/tidy-evaluation.Rmd
@@ -0,0 +1,263 @@
+---
+title: "Tidy evaluation"
+output: rmarkdown::html_vignette
+vignette: >
+  %\VignetteIndexEntry{Tidy evaluation}
+  %\VignetteEngine{knitr::rmarkdown}
+  \usepackage[utf8]{inputenc}
+---
+
+```{r, include = FALSE}
+knitr::opts_chunk$set(collapse = T, comment = "#>")
+library("rlang")
+```
+
+Tidy evaluation is a general toolkit for non-standard evaluation, principally used to create domain-specific languages or grammars. The most prominent examples of such sublanguages in R are modelling specifications with formulas (`lm()`,
+`lme4::lmer()`, etc) and data manipulation grammars (dplyr,
+tidyr). Most of these DSLs put dataframe columns in scope so that
+users can refer to them directly, saving keystrokes during interactive
+analysis and creating easily readable code.
+
+R makes it easy to create DSLs thanks to three features of the
+language:
+
+- R code is first-class. That is, R code can be manipulated like
+  any other object (see `sym()`, `lang()`, and `node()`). We use
+  the term __expression__ (see `is_expr()`) to refer to objects that
+  are created by parsing R code.
+
+- Scope is first-class. Scope is the lexical environment that
+  associates values to symbols in expressions. Unlike most
+  languages, environments can be created (see `env()`) and manipulated 
+  as regular objects.
+
+- Finally, functions can capture the expressions that were supplied
+  as arguments instead of being passed the value of these
+  expressions (see `enquo()` and `enexpr()`).
+
+R functions can capture expressions, manipulate them like
+regular objects, and alter the meaning of symbols referenced in these
+expressions by changing the scope (the environment) in which they
+are evaluated. This combination of features allows R packages to change
+the meaning of R code and create domain-specific sublanguages.
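+All three ingredients are visible in a few lines of base R (a toy
+sketch, not rlang's API):
+
+```{r}
+f <- function(x) substitute(x)  # capture the supplied expression
+ex <- f(a + b)                  # a call object, not a value
+
+e <- new.env()                  # scope as a first-class object
+e$a <- 1
+e$b <- 2
+
+eval(ex, e)                     # symbols resolved in the new scope
+```
+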
+
+Tidy evaluation is an opinionated way to use these
+features to create consistent DSLs. The main principle is that
+sublanguages should feel and behave like R code. They change the
+meaning of R code, but only in a precise and circumscribed way,
+behaving otherwise predictably and in accordance with R semantics. As
+a result, users are able to leverage their existing knowledge of R
+programming to solve problems involving the sublanguage in ways that
+were not necessarily envisioned or planned by their designers.
+
+## Parsing versus evaluation
+
+There are two ways of dealing with unevaluated expressions to create a
+sublanguage. The first is to parse the expression and modify it, and 
+the other is to leave the expression as is and evaluate it in a
+modified environment.
+
+Let's take the example of designing a modelling DSL to illustrate
+parsing. You would need to traverse the call and analyse all functions
+encountered in the expression (in particular, operators like `+` or
+`:`), building a data structure describing a model as you go. This
+method of dealing with expressions is complex, rigid, and error prone
+because you're basically writing an interpreter for R code. It is
+extremely difficult to emulate R semantics when parsing an expression:
+does a function take arguments by value or by expression? Can I parse
+these arguments? Do these symbols mean the same thing in this context?
+Will this argument be evaluated immediately or later on lazily? Given
+the difficulty of getting it right, parsing should be a last resort.
+
+The second way is to rely on evaluation in a specific environment.
+The expression is evaluated in an environment where certain objects
+and functions are given special definitions. For instance `+` might be
+defined as accumulating vectors in a data structure to build a design
+matrix later on, or we might put helper functions in scope (an example
+is `dplyr::select()`). As this method relies on the R interpreter,
+the grammar is much more likely to behave like real R code.
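+As a toy illustration of the evaluation approach (a hypothetical
+term-collecting grammar, base R only, not any real package's
+implementation), `+` can be redefined in the evaluation environment so
+that it accumulates terms instead of adding numbers:
+
+```{r}
+# `+` concatenates its operands instead of adding them
+ops <- new.env(parent = baseenv())
+ops$`+` <- function(e1, e2) c(e1, e2)
+
+# Symbols get their "column" values, operators come from `ops`
+terms <- list2env(list(x = "x", y = "y", z = "z"), parent = ops)
+eval(quote(x + y + z), terms)
+```
+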
+
+R DSLs are traditionally implemented with a mix of both principles.
+Expressions are parsed in ad hoc ways, but are eventually evaluated in
+an environment containing dataframe columns. While it is difficult to
+completely avoid ad hoc parsing, tidyeval DSLs strive to rely on
+evaluation as much as possible.
+
+## Values versus expressions
+
+A corollary of emphasising evaluation is that your DSL functions
+should understand _values_ in addition to expressions. This is
+especially important with
+[quasiquotation][quasiquotation]:
+users can bypass symbolic evaluation completely by unquoting values. For
+instance, the following expressions are completely equivalent:
+
+```{r, eval = FALSE}
+# Taking an expression:
+dplyr::mutate(mtcars, cyl2 = cyl * 2)
+
+# Taking a value:
+var <- mtcars$cyl * 2
+dplyr::mutate(mtcars, cyl2 = !! var)
+```
+
+`dplyr::mutate()` evaluates expressions in a context where dataframe
+columns are in scope, but it accepts any value that can be treated as
+a column (a recycled scalar or a vector as long as there are rows).
+
+A more complex example is `dplyr::select()`. This function evaluates
+dataframe columns in a context where they represent column
+positions. Therefore, `select()` understands column symbols like
+`cyl`:
+
+```{r, eval = FALSE}
+# Taking a symbol:
+dplyr::select(mtcars, cyl)
+
+# Taking an unquoted symbol:
+var <- quote(cyl)
+dplyr::select(mtcars, !! var)
+```
+
+But it also understands column positions:
+
+```{r, eval = FALSE}
+# Taking a column position:
+dplyr::select(mtcars, 2)
+
+# Taking an unquoted column position:
+var <- 2
+dplyr::select(mtcars, !! var)
+```
+
+Understanding values in addition to expressions makes your grammar
+more consistent, predictable, and programmable.
+
+## Tidy scoping
+
+The special type of scoping found in R grammars implemented with
+evaluation poses some challenges. Both objects from a dataset and
+objects from the current environment should be in scope, with the
+former having precedence over the latter. In other words, the dataset
+should __overscope__ the dynamic context. The traditional solution to
+this issue in R is to transform a dataframe to an environment and set
+the calling frame as the parent environment.  This way, the symbols
+appearing in the expression can refer to their surrounding context in
+addition to dataframe columns. In other words, the grammar correctly
+implements an important aspect of R:
+[lexical scoping](http://adv-r.had.co.nz/Functions.html#lexical-scoping).
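+In base R terms, the traditional pattern looks roughly like this
+(`with_cols()` is a hypothetical helper, a simplified sketch of what
+`eval_tidy()`'s `data` argument generalises):
+
+```{r}
+with_cols <- function(data, expr) {
+  # Data columns first, calling context as the parent scope
+  scope <- list2env(as.list(data), parent = parent.frame())
+  eval(substitute(expr), scope)
+}
+
+threshold <- 400
+with_cols(mtcars, disp[disp > threshold])  # `threshold` from the caller
+```
+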
+
+Creating this scope hierarchy (data first, context next) is possible
+because R makes it easy to capture the calling environment (see
+[caller_env()]). However, this supposes that captured expressions were
+actually typed in the most immediate caller frame. This assumption
+easily breaks in R. First, because quasiquotation allows a user to
+combine expressions that do not necessarily come from the same lexical
+context. Second, because arguments can be forwarded through the
+special `...` argument. While base R does not provide any way of
+capturing a forwarded argument along with its original environment,
+rlang features [quos()] for this purpose. This function looks up each
+forwarded argument and returns a list of [quosures][quosure] that bundle the
+expressions with their own dynamic environments.
+
+In that context, maintaining scoping consistency is a challenge
+because we're dealing with multiple environments, one for each
+argument plus one containing the overscoped data. This creates
+difficulties regarding tidyeval's overarching principle that we should
+change R semantics through evaluation. It is possible to evaluate each
+expression in turn, but how can we combine all expressions into one
+and evaluate it tidily at once? An expression can only be evaluated in
+a single environment. This is where quosures come into play.
+
+
+## Quosures and overscoping
+
+Unlike formulas, quosures aren't simple containers of an expression
+and an environment. In the tidyeval framework, they have the property
+of self-evaluating in their own environment. Hence they can appear
+anywhere in an expression (e.g. by being
+[unquoted](http://rlang.tidyverse.org/reference/quasiquotation.html)),
+carrying their own environment and behaving otherwise exactly like
+surrounding R code. Quosures behave like
+reified
+[promises](http://adv-r.had.co.nz/Computing-on-the-language.html#capturing-expressions) that
+are unreified during tidy evaluation.
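+
+This self-evaluation property can be sketched as follows (a
+hypothetical example; `make_quo()` is not an rlang function):
+
+```{r, eval = FALSE}
+# An unquoted quosure keeps evaluating in its own environment
+make_quo <- function() {
+  x <- 1
+  quo(x)
+}
+inner <- make_quo()
+x <- 10
+eval_tidy(quo(!! inner + x))  # 11: each `x` resolved in its own scope
+```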
+
+However, the dynamic environments of quosures do not contain
+overscoped data. It's not of much use for sublanguages to get the
+contextual environment right if they can't also change the meaning
+of code quoted in quosures. To solve this issue, tidyeval rechains
+the overscope to a quosure just before it self-evaluates. This way,
+both the lexical environment and the overscoped data are in scope
+when the quosure is evaluated. It is evaluated tidily.
+
+In practical terms, `eval_tidy()` takes a `data` argument and
+creates an overscope suitable for tidy evaluation. In particular,
+these overscopes contain definitions for self-evaluation of
+quosures. See [eval_tidy_()] and [as_overscope] for more flexible
+ways of creating overscopes.
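+
+Concretely, a minimal sketch of overscoped data:
+
+```{r, eval = FALSE}
+# `cyl` is found in `mtcars` even though a `cyl` binding exists in
+# the calling environment: the data overscopes the context.
+cyl <- "contextual"
+eval_tidy(quo(mean(cyl)), data = mtcars)  # mean of mtcars$cyl
+```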
+
+## Theory
+
+The most important concept of the tidy evaluation framework is that
+expressions should be scoped in their dynamic context. This issue
+is linked to the computer science concept of _hygiene_, which
+roughly means that symbols should be scoped in their local context,
+the context where they are typed by the user. In a way, hygiene is
+what "tidy" refers to in "tidy evaluation".
+
+In languages with macros, hygiene comes up for [macro
+expansion](https://en.wikipedia.org/wiki/Hygienic_macro). While
+macros look like R's non-standard evaluation functions, and share
+certain concepts with them (in particular, they get their arguments
+as unevaluated code), they are actually quite different. Macros are
+expanded at compile time and therefore can only operate on code and
+never on user data. They also don't return a value but are expanded
+in place by the compiler. In comparison, R does not have macros but
+it has [fexprs](https://en.wikipedia.org/wiki/Fexpr), i.e. regular
+functions that get arguments as unevaluated expressions rather than
+by their value (fexprs are what we call NSE functions in the R
+community). Unlike macros, these functions execute at run-time and
+return a value.
+
+Symbolic hygiene is a problem for macros during expansion because
+expanded code might invisibly redefine surrounding symbols.
+Correspondingly, hygiene is an issue for NSE functions if the code
+they capture gets evaluated in the wrong
+environment. Historically, fexprs did not have this problem because
+they existed in languages with dynamic scoping. However in modern
+languages with lexical scoping, it is imperative to bundle quoted
+expressions with their dynamic environment. The most natural way
+to do this in R is to use formulas and quosures.
+
+While formulas were introduced in the S language, the quosure was
+invented much later for R [by Luke Tierney in
+2000](https://github.com/wch/r-source/commit/a945ac8e6a82617205442d44a2be3a497d2ac896).
+From that point on formulas recorded their environment along with
+the model terms. In the Lisp world, the Kernel Lisp language also
+recognised that arguments should be captured together with their
+dynamic environment in order to solve hygienic evaluation in the
+context of lexically scoped languages (see chapter 5 of [John
+Schutt's thesis](https://web.wpi.edu/Pubs/ETD/Available/etd-090110-124904/)).
+However, Kernel Lisp did not have quosures and eschewed quotation
+and quasiquotation operators altogether to sidestep scoping issues.
+
+Tidyeval contributes to solving the problem of hygienic evaluation
+in four ways:
+
+- Promoting the quosure as the proper quotation data structure, in
+  order to keep track of the dynamic environment of quoted
+  expressions.
+
+- Introducing systematic quasiquotation in all capturing functions
+  in order to make it straightforward to program with these
+  functions.
+
+- Treating quosures as reified promises that self-evaluate within
+  their own environments. This allows unquoting quosures within
+  other quosures, which is the key for programming hygienically
+  with capturing functions.
+
+- Building a moving overscope that rechains to quosures as they get
+  evaluated. This makes it possible to change the evaluation
+  context and at the same time take the lexical context of each
+  quosure into account.
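+
+These points can be illustrated together (a sketch using a
+hypothetical helper `add_one()`):
+
+```{r, eval = FALSE}
+# `enquo()` captures the caller's argument as a quosure; unquoting
+# it inside `quo()` nests the two lexical contexts hygienically.
+add_one <- function(var) {
+  one <- 1
+  quo(!! enquo(var) + one)
+}
+q <- add_one(cyl)
+eval_tidy(q, data = mtcars)  # `cyl` from the data, `one` from add_one()
+```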
