In this demo I cover the fundamental end-to-end workflow involved in creating and managing tailored job applications using autocv.

A minimal reproducible example

The following end-to-end example runs in a temporary environment on your local machine, constructing a project directory, an application directory, a resume, and keyword reports, all from the autocv-supplied example job posting, template datasets, and template stylesheets.

autocv::in_tmp_env({
  message("1. Build the project directory...")
  autocv::build_base_directory()

  data(example_job_metadata)
  app_df <- autocv::construct_app_metadata(app_info = example_job_metadata)

  message("")
  message("2. Build the application...")
  autocv::build_app_directory(app_df = app_df, open = FALSE)

  message("")
  message("3. Render the application...")
  # autocv::render_app(cover = FALSE, email = FALSE)
  autocv::render_resume_plain()

  message("")
  message("4. Check application keywords...")
  report_df <- autocv::check_skills()
  print(report_df)

  message("")
  message("5. Inspect application metadata...")
  app_df <- autocv::get_app_info()
  print(app_df)
})

A step-by-step example

Let us now walk through each of these steps in greater depth, keeping track of the evolving file tree and copying the generated documents from the temporary environment to your working directory for later reference.

Step 1: Build the project directory

We begin by creating the basic file tree upon which to build tailored job applications. The file tree includes placeholder output/ and applications/ directories that store generated documents, as well as an input/ directory containing the template spreadsheets resume_data.xlsx (see example_skill_data and example_position_data) and cover_data.xlsx (see example_contact_data and example_text_data), plus the template metadata file job_metadata.yml (see example_job_metadata). A .Rprofile file containing path environment variables is automatically generated, along with an .Rproj file and the notebooks resume.Rmd and cv.Rmd, which render job application documents from the data contained in these spreadsheets.

autocv::in_tmp_env({
  autocv::build_base_directory()
})
#> 
#> ── Setting project paths ───────────────────────────────────────────────────────
#> 
#> Your current path is: /tmp/RtmpkTOdSk/
#> Set your desired project path relative to your current path:
#> 
#> You entered the path: /tmp/RtmpkTOdSk/
#> 
#> Setting your project root to: /tmp/RtmpkTOdSk/
#> Created file .here in /tmp/RtmpkTOdSk . Please start a new R session in the new project directory.
#>  Writing file: .Rprofile
#>  Writing file: mycv.Rproj
#> 
#>  Writing to: ../../../../../../tmp/RtmpkTOdSk
#> 
#> ── Creating directory tree ─────────────────────────────────────────────────────
#> ! Folder already exists: . (skipping)
#>  Creating folder: R
#>  Creating folder: R/input
#>  Creating folder: R/output
#>  Creating folder: R/applications
#> 
#> ── Writing skeleton data files ─────────────────────────────────────────────────
#>  Writing file: R/input/resume_data.xlsx
#>  Writing file: R/input/cover_data.xlsx
#>  Writing file: R/input/job_metadata.yml
#> 
#> ── Writing resume-building notebooks ───────────────────────────────────────────
#>  Writing file: resume.Rmd
#>  Writing file: cv.Rmd
#> Current file tree:
#> /tmp/RtmpkTOdSk/.
#> ├── R
#> │   ├── applications
#> │   ├── input
#> │   │   ├── cover_data.xlsx
#> │   │   ├── job_metadata.yml
#> │   │   └── resume_data.xlsx
#> │   └── output
#> ├── cv.Rmd
#> ├── filea44b717a448b
#> ├── filea44b7c562b13
#> ├── filea44bba5b49
#> ├── filea44bc33638
#> ├── libloc_177_2e46c0fc1ffb1091.rds
#> ├── libloc_239_283e8cdc8a036cdc.rds
#> ├── libloc_246_4744b1836fb1a8fb.rds
#> ├── mycv.Rproj
#> ├── resume.Rmd
#> └── rmarkdown-stra44b54b49e21.html
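
The template spreadsheets written above are built from datasets bundled with autocv. If you would like to inspect one of them before editing, you can load it directly; a quick sketch, assuming example_skill_data can be loaded with data() in the same way as example_job_metadata is elsewhere in this article:

# Load and preview one of the bundled template datasets
data(example_skill_data, package = "autocv")
head(example_skill_data)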

Step 2: Build the application directory

With the project directory constructed and template data files copied over, we can now generate tailored job applications. To do so, update the fields in the input/job_metadata.yml file with information relevant to the job of interest, namely the id, company, position, and posting_url fields; optionally, also populate the base_id, portal_url, linkedin_url, recruiter_email, and notes fields. For additional information, see example_job_metadata.
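
For illustration, here is a hypothetical sketch of what a filled-in input/job_metadata.yml might look like; the field names are taken from the list above, while the values (including the URL) are placeholders rather than autocv output:

id: "AB"
base_id: "data-science"
company: "Company Name"
position: "Data Scientist"
posting_url: "https://example.com/jobs/data-scientist"
portal_url: ~
linkedin_url: ~
recruiter_email: ~
notes: "Excited about this role"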

autocv::in_tmp_env({
  data(example_job_metadata)
  app_df <- autocv::construct_app_metadata(app_info = example_job_metadata)
  autocv::build_app_directory(app_df = app_df, open = FALSE)
})
#> 
#>  Writing to: ../../../../../../tmp/RtmpkTOdSk/R/applications/2024-07-data-science/2024-09-28-01-company-name-data-scientist-AB
#> 
#> ── Creating directory tree ─────────────────────────────────────────────────────
#>  Creating folder: input
#>  Creating folder: output
#> 
#> ── Copying base data files into directory ──────────────────────────────────────
#>  Writing file: input/resume_data_AB.xlsx
#>  Writing file: input/cover_data_AB.xlsx
#> 
#> ── Writing application metadata ────────────────────────────────────────────────
#>  Writing file: metadata_AB.yml
#>  Writing file: ../log.rds
#> 
#> ── Downloading job posting and building a skill report ─────────────────────────
#>  Writing file: input/posting_AB.txt
#> 
#> ── Keyword check: Posting vs job terms list ────────────────────────────────────
#>  Writing file: output/keyword_counts_AB.csv
#> # A tibble: 1 × 2
#>   term     matches
#>   <chr>      <dbl>
#> 1 Disabled       4
#> 
#> ── Keyword check: Posting vs data terms list ───────────────────────────────────
#>  Writing file: output/skill_counts_posting_AB.csv
#> # A tibble: 55 × 2
#>    term                        matches
#>    <chr>                         <dbl>
#>  1 Data Science                     73
#>  2 Data Analysis                    15
#>  3 UX                                1
#>  4 Statistics                       29
#>  5 Cloud Computing                   4
#>  6 Insights                         10
#>  7 Computer Science                  7
#>  8 Survey Data                       1
#>  9 Algorithms                        4
#> 10 Unstructured Data                 2
#> 11 Domain Knowledge                  3
#> 12 Mathematics                       3
#> 13 Data-Driven                       3
#> 14 Statistical Knowledge             1
#> 15 Data Visualization                3
#> 16 Machine Learning                  8
#> 17 C                                 8
#> 18 Classification                    3
#> 19 Data Mining                       4
#> 20 Statistical Learning              3
#> 21 Data Collection                   1
#> 22 Big Data                         13
#> 23 Exploratory Data Analysis         2
#> 24 Data Management                   2
#> 25 Data Cleaning                     1
#> 26 Predictive Models                 3
#> 27 Models                            2
#> 28 Data-Driven Decisions             1
#> 29 Statistical Analysis              2
#> 30 Data Preprocessing                1
#> 31 Feature Engineering               1
#> 32 Model Selection                   1
#> 33 Implementation                    1
#> 34 Analytical Techniques             1
#> 35 Analytical                        1
#> 36 Large Datasets                    2
#> 37 Non-Technical Audiences           1
#> 38 Critical Thinking                 1
#> 39 Data-Driven Decision-Making       1
#> 40 Data Analytics                    1
#> 41 Cloud Services                    1
#> 42 Python                            1
#> 43 R                                 2
#> 44 Data Engineering                  1
#> 45 HTML                              4
#> 46 Statistical Modeling              1
#> 47 Causal Inference                  1
#> 48 Collaboration                     1
#> 49 Signal Processing                 1
#> 50 Spark                             1
#> 51 Spark SQL                         1
#> 52 Data Ethics                       1
#> 53 ETL                               1
#> 54 ELT                               1
#> 55 Developers                        1
#> 
#>  Opening: .
#> Current file tree:
#> /tmp/RtmpkTOdSk/./R/applications
#> └── 2024-07-data-science
#>     ├── 2024-09-28-01-company-name-data-scientist-AB
#>     │   ├── input
#>     │   │   ├── cover_data_AB.xlsx
#>     │   │   ├── posting_AB.txt
#>     │   │   ├── report.xlsx
#>     │   │   └── resume_data_AB.xlsx
#>     │   ├── metadata_AB.yml
#>     │   └── output
#>     │       ├── keyword_counts_AB.csv
#>     │       └── skill_counts_posting_AB.csv
#>     └── log.rds

Step 3: Edit the application

After building the application directory, run autocv::edit_app(). This opens the application's copies of the spreadsheets resume_data.xlsx and cover_data.xlsx (suffixed with the application id, e.g. resume_data_AB.xlsx), which contain the data used to generate your application documents. Edit the entries and metadata in these documents as desired to suit the job in question.
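
Because this call opens the spreadsheets in an external editor, it is interactive and is therefore not evaluated in this article:

# Interactive: opens the active application's data spreadsheets for editing
autocv::edit_app()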

Step 4: Render the application

Once you’ve tailored the information in your application to the job in question, you are ready to render your application documents.

autocv::in_tmp_env({
  # autocv::render_app(cover = FALSE, email = FALSE)
  autocv::render_resume_plain()
})
#> 
#> ── Building resume.txt ─────────────────────────────────────────────────────────
#>  Writing file: output/resume_yourname_AB.txt
#> Current file tree:
#> /tmp/RtmpkTOdSk/./R/applications
#> └── 2024-07-data-science
#>     ├── 2024-09-28-01-company-name-data-scientist-AB
#>     │   ├── input
#>     │   │   ├── cover_data_AB.xlsx
#>     │   │   ├── posting_AB.txt
#>     │   │   ├── report.xlsx
#>     │   │   └── resume_data_AB.xlsx
#>     │   ├── metadata_AB.yml
#>     │   └── output
#>     │       ├── keyword_counts_AB.csv
#>     │       ├── resume_yourname_AB.txt
#>     │       └── skill_counts_posting_AB.csv
#>     └── log.rds

Step 5: Check application keywords

Having generated a resume, we can now analyze its contents to determine which data-related keywords we included and whether they align with the keywords in the job posting. This step performs three independent checks: it identifies general job-related keywords in the job posting, data science keywords in the job posting, and data science keywords in the resume. It then matches the data science terms found in the job posting against those mentioned in the resume, either in the skills section or in the body text.

autocv::in_tmp_env({
  report_df <- autocv::check_skills()
  print(report_df)
})
#> 
#> ── Keyword check: Posting vs job terms list ────────────────────────────────────
#> ! File already exists: output/keyword_counts_AB.csv (skipping)
#>       term matches
#> 1 Disabled       4
#> 
#> ── Keyword check: Posting vs data terms list ───────────────────────────────────
#> ! File already exists: output/skill_counts_posting_AB.csv (skipping)
#>                           term matches
#> 1                 Data Science      73
#> 2                Data Analysis      15
#> 3                           UX       1
#> 4                   Statistics      29
#> 5              Cloud Computing       4
#> 6                     Insights      10
#> 7             Computer Science       7
#> 8                  Survey Data       1
#> 9                   Algorithms       4
#> 10           Unstructured Data       2
#> 11            Domain Knowledge       3
#> 12                 Mathematics       3
#> 13                 Data-Driven       3
#> 14       Statistical Knowledge       1
#> 15          Data Visualization       3
#> 16            Machine Learning       8
#> 17                           C       8
#> 18              Classification       3
#> 19                 Data Mining       4
#> 20        Statistical Learning       3
#> 21             Data Collection       1
#> 22                    Big Data      13
#> 23   Exploratory Data Analysis       2
#> 24             Data Management       2
#> 25               Data Cleaning       1
#> 26           Predictive Models       3
#> 27                      Models       2
#> 28       Data-Driven Decisions       1
#> 29        Statistical Analysis       2
#> 30          Data Preprocessing       1
#> 31         Feature Engineering       1
#> 32             Model Selection       1
#> 33              Implementation       1
#> 34       Analytical Techniques       1
#> 35                  Analytical       1
#> 36              Large Datasets       2
#> 37     Non-Technical Audiences       1
#> 38           Critical Thinking       1
#> 39 Data-Driven Decision-Making       1
#> 40              Data Analytics       1
#> 41              Cloud Services       1
#> 42                      Python       1
#> 43                           R       2
#> 44            Data Engineering       1
#> 45                        HTML       4
#> 46        Statistical Modeling       1
#> 47            Causal Inference       1
#> 48               Collaboration       1
#> 49           Signal Processing       1
#> 50                       Spark       1
#> 51                   Spark SQL       1
#> 52                 Data Ethics       1
#> 53                         ETL       1
#> 54                         ELT       1
#> 55                  Developers       1
#> 
#> ── Keyword check: Resume vs data terms list ────────────────────────────────────
#>  Writing file: output/skill_counts_resume_AB.csv
#> # A tibble: 16 × 2
#>    term               matches
#>    <chr>                <dbl>
#>  1 GitHub                   3
#>  2 Python                   2
#>  3 R                        2
#>  4 SQL                      2
#>  5 Machine Learning         1
#>  6 SciPy                    3
#>  7 Scikit-Learn             2
#>  8 TensorFlow               2
#>  9 DevOps                   1
#> 10 Git                      1
#> 11 GitHub Actions           2
#> 12 ML Pipelines             1
#> 13 Data Modeling            1
#> 14 CI/CD Pipelines          1
#> 15 Bayesian Inference       1
#> 16 Documentation            2
#> 
#> ── Keyword report: Resume vs posting keywords ──────────────────────────────────
#> Not in your skill set:
#> 1. Algorithms
#> 
#> 2. Analytical
#> 
#> 3. Analytical Techniques
#> 
#> 4. C
#> 
#> 5. Cloud Computing
#> 
#> 6. Cloud Services
#> 
#> 7. Collaboration
#> 
#> 8. Computer Science
#> 
#> 9. Data Analysis
#> 
#> 10. Data Analytics
#> 
#> 11. Data Cleaning
#> 
#> 12. Data Collection
#> 
#> 13. Data Engineering
#> 
#> 14. Data Ethics
#> 
#> 15. Data Mining
#> 
#> 16. Data Science
#> 
#> 17. Data-Driven
#> 
#> 18. Data-Driven Decision-Making
#> 
#> 19. Data-Driven Decisions
#> 
#> 20. Developers
#> 
#> 21. Domain Knowledge
#> 
#> 22. ELT
#> 
#> 23. Implementation
#> 
#> 24. Insights
#> 
#> 25. Large Datasets
#> 
#> 26. Mathematics
#> 
#> 27. Models
#> 
#> 28. Non-Technical Audiences
#> 
#> 29. Predictive Models
#> 
#> 30. Signal Processing
#> 
#> 31. Spark SQL
#> 
#> 32. Statistical Knowledge
#> 
#> 33. Survey Data
#> 
#> 34. Unstructured Data
#> 
#> 35. UX
#> 
#> 
#> 
#>  Writing file: output/skill_report_AB.csv
#> 
#> 
#> 
#> • 5.45% of the posting keywords are in your resume.
#>                           term count matches in_my_skill_set in_my_skill_list
#> 1             Machine Learning     8       1            TRUE            FALSE
#> 2                       Python     1       2            TRUE             TRUE
#> 3                            R     2       2            TRUE             TRUE
#> 4                 Data Science    73       0           FALSE            FALSE
#> 5                Data Analysis    15       0           FALSE            FALSE
#> 6                           UX     1       0           FALSE            FALSE
#> 7                   Statistics    29       0            TRUE            FALSE
#> 8              Cloud Computing     4       0           FALSE            FALSE
#> 9                     Insights    10       0           FALSE            FALSE
#> 10            Computer Science     7       0           FALSE            FALSE
#> 11                 Survey Data     1       0           FALSE            FALSE
#> 12                  Algorithms     4       0           FALSE            FALSE
#> 13           Unstructured Data     2       0           FALSE            FALSE
#> 14            Domain Knowledge     3       0           FALSE            FALSE
#> 15                 Mathematics     3       0           FALSE            FALSE
#> 16                 Data-Driven     3       0           FALSE            FALSE
#> 17       Statistical Knowledge     1       0           FALSE            FALSE
#> 18          Data Visualization     3       0            TRUE            FALSE
#> 19                           C     8       0           FALSE            FALSE
#> 20              Classification     3       0            TRUE            FALSE
#> 21                 Data Mining     4       0           FALSE            FALSE
#> 22        Statistical Learning     3       0            TRUE            FALSE
#> 23             Data Collection     1       0           FALSE            FALSE
#> 24                    Big Data    13       0            TRUE            FALSE
#> 25   Exploratory Data Analysis     2       0            TRUE            FALSE
#> 26             Data Management     2       0            TRUE            FALSE
#> 27               Data Cleaning     1       0           FALSE            FALSE
#> 28           Predictive Models     3       0           FALSE            FALSE
#> 29                      Models     2       0           FALSE            FALSE
#> 30       Data-Driven Decisions     1       0           FALSE            FALSE
#> 31        Statistical Analysis     2       0            TRUE            FALSE
#> 32          Data Preprocessing     1       0            TRUE            FALSE
#> 33         Feature Engineering     1       0            TRUE            FALSE
#> 34             Model Selection     1       0            TRUE            FALSE
#> 35              Implementation     1       0           FALSE            FALSE
#> 36       Analytical Techniques     1       0           FALSE            FALSE
#> 37                  Analytical     1       0           FALSE            FALSE
#> 38              Large Datasets     2       0           FALSE            FALSE
#> 39     Non-Technical Audiences     1       0           FALSE            FALSE
#> 40           Critical Thinking     1       0            TRUE            FALSE
#> 41 Data-Driven Decision-Making     1       0           FALSE            FALSE
#> 42              Data Analytics     1       0           FALSE            FALSE
#> 43              Cloud Services     1       0           FALSE            FALSE
#> 44            Data Engineering     1       0           FALSE            FALSE
#> 45                        HTML     4       0            TRUE            FALSE
#> 46        Statistical Modeling     1       0            TRUE            FALSE
#> 47            Causal Inference     1       0            TRUE            FALSE
#> 48               Collaboration     1       0           FALSE            FALSE
#> 49           Signal Processing     1       0           FALSE            FALSE
#> 50                       Spark     1       0            TRUE            FALSE
#> 51                   Spark SQL     1       0           FALSE            FALSE
#> 52                 Data Ethics     1       0           FALSE            FALSE
#> 53                         ETL     1       0            TRUE            FALSE
#> 54                         ELT     1       0           FALSE            FALSE
#> 55                  Developers     1       0           FALSE            FALSE
#> Current file tree:
#> /tmp/RtmpkTOdSk/./R/applications
#> └── 2024-07-data-science
#>     ├── 2024-09-28-01-company-name-data-scientist-AB
#>     │   ├── input
#>     │   │   ├── cover_data_AB.xlsx
#>     │   │   ├── posting_AB.txt
#>     │   │   ├── report.xlsx
#>     │   │   └── resume_data_AB.xlsx
#>     │   ├── metadata_AB.yml
#>     │   └── output
#>     │       ├── keyword_counts_AB.csv
#>     │       ├── resume_yourname_AB.txt
#>     │       ├── skill_counts_posting_AB.csv
#>     │       ├── skill_counts_resume_AB.csv
#>     │       └── skill_report_AB.csv
#>     └── log.rds

Step 6: Inspect application metadata

Every step so far has yielded metadata that was automatically stored in a log file, with each row corresponding to a unique job application. If at any point you wish to review the information for an existing job application, you can easily retrieve it from the log.

autocv::in_tmp_env({
  app_df <- autocv::get_app_info()
  print(app_df)
})
#>   id      company       position status date_applied days_since notes
#> 1 AB Company Name Data Scientist    ipr            /          /     /

Step 7: Update application metadata

At this point we can update the job application log with new information as desired, such as changes to the status of the application or a note about the job in question.

autocv::in_tmp_env({
  autocv::apply_to(app_id = "AB")
  autocv::update_app_info(app_id = "AB", notes = "Excited about this role")

  app_df <- autocv::get_app_info()
  print(app_df)
})
#> • Modifying field: status (ipr -> applied)
#> • Modifying field: date_applied (/ -> 2024-09-28)
#> 
#> ! File already exists: metadata_AB.yml (overwriting)
#> ! File already exists: ../log.rds (updating entry for 'AB')
#> • Modifying field: notes (/ -> Excited about this role)
#> 
#> ! File already exists: metadata_AB.yml (overwriting)
#> ! File already exists: ../log.rds (updating entry for 'AB')
#>   id      company       position  status date_applied days_since
#> 1 AB Company Name Data Scientist applied   2024-09-28          0
#>                  notes
#> 1 Excited about thi...

Review the generated documents

Let’s conclude by transferring some of the documents generated within our temporary environment into long-term storage. Each autocv-generated application document is displayed below for reference.

autocv::in_tmp_env({
  source_paths <- as.character(autocv::get_app_info(
    id = "AB",
    field = c(
      "resume_plain_path",
      "keyword_counts_posting",
      "skill_counts_posting",
      "skill_counts_resume",
      "skill_report"
    )
  )[2:6])
  target_dir <- file.path(fs::path_wd(), "output")
  if (!dir.exists(target_dir)) { dir.create(target_dir) }
  target_paths <- file.path(target_dir, basename(source_paths))

  autocv::copy_files(
    source_paths = source_paths,
    target_paths = target_paths
  )
})
#> ! File already exists: output/resume_yourname_AB.txt (skipping)
#> ! File already exists: output/keyword_counts_AB.csv (skipping)
#> ! File already exists: output/skill_counts_posting_AB.csv (skipping)
#> ! File already exists: output/skill_counts_resume_AB.csv (skipping)
#> ! File already exists: output/skill_report_AB.csv (skipping)
#> Working directory output file tree:
#> /home/runner/work/autocv/autocv/vignettes/output
#> ├── cv_yourname.html
#> ├── keyword_counts_AB.csv
#> ├── resume_yourname_AB.pdf
#> ├── resume_yourname_AB.txt
#> ├── skill_counts_posting_AB.csv
#> ├── skill_counts_resume_AB.csv
#> └── skill_report_AB.csv
# NOTE: evaluation is disabled for the following chunks due to LaTeX rendering
# issues in the GitHub workflow
autocv::in_tmp_env({
  autocv::render_cv_as_html()
  autocv::render_resume()
})
autocv::in_tmp_env({
  source_paths <- c(
    as.character(autocv::get_app_info(id = "AB", field = "resume_path")[2]),
    file.path(autocv::get_path_to("output"), "cv_yourname.html")
  )
  target_dir <- file.path(fs::path_wd(), "output")
  if (!dir.exists(target_dir)) { dir.create(target_dir) }
  target_paths <- file.path(target_dir, basename(source_paths))

  autocv::copy_files(
    source_paths = source_paths,
    target_paths = target_paths
  )
})
# Copy the rendered HTML CV from the vignette output into docs/ for display
filename <- "cv_yourname.html"
source_path <- file.path(
  autocv::get_path_to("notebooks"), "vignettes", "output", filename
)
target_path <- file.path(
  autocv::get_path_to("notebooks"), "docs", filename
)
file.copy(source_path, target_path, overwrite = TRUE)
#> [1] TRUE

The following documents were generated by autocv: the PDF resume (resume_yourname_AB.pdf), the HTML CV (cv_yourname.html), and the plain-text resume (resume_yourname_AB.txt). Each is embedded in the rendered version of this article, with a download link offered as a fallback.

And there you have it! A few steps (“build, edit, render, check” and “inspect, update”), neatly encapsulated within autocv’s core user interface, are all you need to construct beautiful, tailored, easily tracked resume documents and job applications.