Last updated 4 April 2021
The daiR vignettes deliberately use simple examples, with pdf files uploaded straight into the root of the bucket and downloaded again. In real life you may be dealing with slightly more complex scenarios.
Document AI accepts only PDFs, GIFs and TIFFs, but sometimes your source documents are in other formats.
daiR’s helper function `image_to_pdf()` is designed to help with this. Because it is based on ImageMagick, it converts almost any image file format to pdf. You can also pass it a vector of image files and ask for a single pdf as output (see the sketch further down).
To illustrate, we can take this image of an old text from the National Park Service website:
```r
dest_path <- file.path(tempdir(), "nps.jpg")
download.file("https://www.nps.gov/articles/images/dec-of-sentiments-loc-copy.jpg",
              destfile = dest_path,
              mode = "wb")
```
And convert it to a pdf like so:
```r
library(daiR)
dest_path2 <- file.path(tempdir(), "nps.pdf")
image_to_pdf(dest_path, dest_path2)
```
And the file is ready for processing with Document AI.
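If a document instead consists of several image files, you can pass them to `image_to_pdf()` as a vector and get a single pdf back. A minimal sketch, assuming hypothetical scan files and the same argument pattern as above:

```r
# Several single-page scans combined into one pdf (hypothetical filenames)
scans <- c("scan1.jpg", "scan2.jpg", "scan3.jpg")
image_to_pdf(scans, file.path(tempdir(), "combined.pdf"))
```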
At other times you may want to have folders inside your bucket. A typical scenario is when your source documents are stored in a folder tree and you want to batch process everything without losing the original folder structure.
Problem is, it’s technically not possible to have folders in Google Storage; files in a bucket are kept side by side in a flat structure. We can, however, imitate a folder structure by adding prefixes with forward slashes to the filenames. This is not complicated, but it requires paying attention to filenames at the upload and download stages.
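The principle in a nutshell: an object uploaded under a name with a slash-separated prefix will display as if it sat inside a folder. A minimal sketch with a hypothetical local file:

```r
library(googleCloudStorageR)
# "folder1" is not a real folder, just the first part of the object name;
# the bucket browser will nevertheless display nps.pdf as sitting inside it
gcs_upload("nps.pdf", name = "folder1/nps.pdf")
```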
To see this in practice, let’s create two folders in our temporary directory:
```r
library(fs)
dir1 <- file.path(tempdir(), "folder1")
dir2 <- file.path(tempdir(), "folder2")
dir_create(c(dir1, dir2))
```
Then we make four copies of the file `nps.pdf` and put two in each folder:
```r
dest_path3 <- file.path(dir1, "nps.pdf")
dest_path4 <- file.path(dir1, "nps2.pdf")
dest_path5 <- file.path(dir2, "nps3.pdf")
dest_path6 <- file.path(dir2, "nps4.pdf")

file_copy(dest_path2, dest_path3)
file_copy(dest_path2, dest_path4)
file_copy(dest_path2, dest_path5)
file_copy(dest_path2, dest_path6)
```
To upload this entire structure to Google Storage, we create a vector of the files in all subfolders by setting `recurse = TRUE` in the `dir_ls()` function. Note that this will also pick up the original `nps.pdf` sitting in the root of the temporary directory; I’m assuming the directory contains no other pdf files.
```r
pdfs <- dir_ls(tempdir(), glob = "*.pdf", recurse = TRUE)
```
We then iterate the `gcs_upload()` function over our vector:
```r
library(googleCloudStorageR)
library(purrr)

# name = .x stores each file under its full local path,
# which preserves the folder prefixes in the object names
resp <- map(pdfs, ~ gcs_upload(.x, name = .x))
```
If we now check the bucket contents, we see that the files are in their respective “folders”.
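For instance, with `googleCloudStorageR`’s listing function:

```r
# The $name column holds the full object names,
# including the folder1/ and folder2/ prefixes
contents <- gcs_list_objects()
contents$name
```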
Bear in mind, though, that this is an optical illusion; the files are technically still on the same level. In reality, the `folder1/` and `folder2/` prefixes are an integral part of the filenames.
We can process these files as they are with the following command:
```r
resp <- dai_async(pdfs)
```
DAI then returns `.json` files titled `folder1/<job_number>/0/nps-0.json` and so forth. We can download these the usual way:
```r
content <- gcs_list_objects()

# select the .json output files (note the escaped dot in the regex)
jsons <- grep("\\.json$", content$name, value = TRUE)

resp <- map(jsons, ~ gcs_get_object(.x, saveToDisk = file.path(tempdir(), .x)))
```
And the json files will be stored in their respective subfolders alongside the source pdfs.
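You can verify this by listing the downloaded files with `fs` (already loaded):

```r
# The local tree now mirrors the bucket's pseudo-folders
dir_ls(tempdir(), glob = "*.json", recurse = TRUE)
```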
Note, however, that this last script only worked because folders titled `folder1` and `folder2` already existed in our temporary directory. If they hadn’t, R would have returned an error, because the `gcs_get_object()` function cannot create new folders on your hard drive.
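One way around this is to recreate the folder tree locally before downloading, so that the object names work unchanged as save paths. A sketch using `fs`, assuming the `jsons` vector from the previous step:

```r
# path_dir() extracts the directory part of each object name;
# dir_create() builds the corresponding local folders (recursively by default)
dir_create(path(tempdir(), unique(path_dir(jsons))))
```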
Alternatively, if you wanted to download the files to a folder without a corresponding folder tree to “receive” them, you could change the forward slashes in the bucket filepaths to underscores (or something else), as follows:
```r
# create the destination folder first, since gcs_get_object() will not
dir_create(file.path(tempdir(), "folder3"))

# flatten the object names so all files can live in a single folder
resp <- map(jsons, ~ gcs_get_object(.x,
                                    saveToDisk = file.path(tempdir(), "folder3",
                                                           gsub("/", "_", .x))))
```