Australian Toolbook User Group
Note: ToolBook 6 now has an "import text" command on the Insert menu rather than the Tools menu. This command imports raw text files into a specific ToolBook field. For more complicated text importing, you may need to do it manually using copy/paste clipboard maneuvers, or write some script.
Importing Large RTF Text Files Solution by Asymetrix Tech Support
Question: Is there any possibility of importing an RTF file of about 300K into ToolBook?
Answer: As soon as you start chunking an RTF file you will have problems, since the main header on the first chunk is not present in the additional chunks; thus the additional pieces are incomplete. I know of no practical way to chunk an RTF file so that the problem you are encountering will not occur. A regular text file will not be a problem.
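To see why, note that an RTF file keeps all of its font, color, and style tables in a single header at the very top of the file. A fabricated fragment for illustration:

```
{\rtf1\ansi\deff0
{\fonttbl{\f0 Times New Roman;}}
\f0\fs24 The body text begins here ...
```

A chunk cut from later in the file contains only body text plus control words like \f0\fs24 that refer back to tables it no longer carries, so an RTF reader cannot interpret that chunk on its own. A plain text file has no such header, which is why it chunks cleanly.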
Reading Lots of Text into Tbk
Question: How are developers getting text into books without having to type it into each field? Can TB recognize bookmarks or special characters in order to put different sections of text (within one document) into different TB fields?
If the text is contained in a text file (or RTF file), it can be read in (up to 64k chunks at a time) and be placed into field(s) (up to 32k per field).
The problem you mention is that you want to be able to detect the presence of some type of character (a bookmark). This is not directly supported. It can be done, however, by checking (parsing) the text for that special character (a marker) and then performing some action (like putting the next bit of text into another field until an ending marker is hit).
The document is a Word 6.0 rtf file. We use Word to storyboard the lessons. A macro exports all the instructional text to rtf files.
On each Toolbook page I have two text fields for bulleted text and prompt text. I want to be able to define a directory, file name, and bookmark within the page script (or field script) and have that specific chunk of text displayed in the appropriate field.
Then, I go to my next Toolbook page and define the same directory & file but use a different bookmark to pull in the next chunk of text within the rtf file. Currently, I have a different rtf file for each chunk of text. I was hoping there would be a less cumbersome approach. There's no activating click to display the text.
When the user turns to the Toolbook page I just want the text to be displayed (On OpenPage??). Just like an external call to a graphic file. This project involves multiple designers, many Toolbook books & pages, many chunks of text. My goal is to have designers only type these chunks of text once and let programming do the rest.
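A rough sketch of that workflow in OpenScript, assuming the markers and every name here (the file path, the "[[BM12]]" and "[[END]]" bookmarks, the "bulletText" field) are invented for illustration:

```
-- Hypothetical page script: on enterPage, pull the chunk of
-- text between two bookmarks out of one shared exported file.
to handle enterPage
  srcFile = "c:\lessons\lesson1.txt"  -- defined per book
  bm = "[[BM12]]"                     -- defined per page
  openFile(srcFile)
  buffer = null
  do
    readFile(srcFile) to CRLF
    put It & CRLF after buffer
  until sysError = "end of file"
  closeFile(srcFile)
  -- take the text between the page's bookmark and the
  -- next end marker (assumes each appears exactly once)
  startPos = offset(bm, buffer) + charCount(bm)
  endPos = offset("[[END]]", buffer)
  text of field "bulletText" of this page = \
    chars startPos to (endPos - 1) of buffer
  forward
end enterPage
```

This keeps all the designers' chunks in one file per lesson; each page only differs in the bookmark string it asks for.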
My recommendation is to put meaningful markers in the text that allow for ease of parsing. The data can then be read into a buffer in 32-64k chunks, parsed (based on markers or tags), and put into appropriate fields with specific properties (where some text can be hotwords). It is not a trivial task, but doable.
Yes, I agree with Jim's last statement there - "not trivial...but doable." I've implemented it in a program I wrote. Here's the routine I built. The problem with hotwords is that it is difficult to manage their scripts, which are effectively how the hyperlinks work. My solution was to write a handler at the page level and test for a hotword click; I would then get the name of the hotword and make the link.
I used $ and ^ to mark the beginning and end of hotwords. You could use only a start marker if your hotwords are all just 1 word. Otherwise you'll need an ending character as well. I then loaded the marked text into a hidden field, parsed it and then copied it into the destination field.
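For illustration, a textline marked up this way might look like the following (the wording is invented):

```
See the $glossary^ for a definition of each $key term^.
```

After parsing, the markers are stripped, "glossary" and "key term" are colored as links, and the page-level handler uses the hotword's text to decide where to jump.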
I'm sure that I could find quicker ways to write this code now, but I don't have the need/time/desire to do so. A pointer: put all your text into a variable and parse from within the variable.
....
step i from 1 to textlineCount(text of \
    recordField "list" of page (pageNo))
  textline (i) of text of field "hideTxt" of self = \
    textline (i) of text of recordField "list" \
    of page (pageNo)
  -- SEARCH THE TEXTLINE FOR TAGGED WORDS;
  -- REMOVE TAGS & COLORIZE
  marker = offset("$", textline i of text of \
    field "hideTxt" of self)
  if marker > 0
    do
      set link to null
      marker2 = offset("^", textline i of text of \
        field "hideTxt" of self)
      link = chars (marker + 1) to (marker2 - 1) of \
        textline i of text of field "hideTxt" of self
      select chars (marker) to (marker2) of \
        textline (i) of text of field "hideTxt" of self
      put link into selectedText
      select chars (marker) to (marker2 - 2) of \
        textline i of text of field "hideTxt" of self
      strokeColor of selectedText = \
        hotwordColor of this book
      marker = offset("$", textline i of text of \
        field "hideTxt" of self)
    until marker = 0
  end if
end step
sysLockScreen = true
richText of field "soList" of self = \
  richText of field "hideTxt" of self
hide field "hideTxt" of self
....
Importing Text Data into Toolbook Solution by William McKee
Having just completed an application which imports large text files (i.e., greater than 32Kb, the field-size limit under Windows 3.1), I volunteered to report the results of my efforts for the newsletter. During my struggles I discovered ways to determine the end-of-line (EOL) character, read the data file line by line, locate matching patterns in the text file being imported, and gracefully inform the user when an unexpected event occurred. Before going into coding examples, let's take a look at the various types and meanings of EOL characters. Being Windows developers, you are all probably familiar with the CRLF code. Of course, CR stands for carriage return and LF for line feed. The application I was developing needed to be able to handle files from multiple platforms (a real concern with the increasing interconnectivity of computers). For example, the Macintosh uses only a CR for the EOL character; Unix files use only LF.
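If you do not know a file's origin in advance, the EOL convention can be guessed from a sample of the raw text. This helper is my own sketch, not part of the article's code; it assumes the sample has already been read into a variable and returns the ASCII code of the character to read to:

```
-- Hypothetical helper: guess which EOL character to read to.
to get guessEOLChar sample
  if offset(ansiToChar(13), sample) > 0 and \
      offset(ansiToChar(10), sample) = 0
    return 13  -- Macintosh: CR only
  else
    return 10  -- Windows (CRLF) or Unix (LF): read to the LF
  end if
end guessEOLChar
```

For Windows files this reads to the LF and leaves a stray CR at the end of each line, which is exactly the case the addEOLChar routine below detects.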
I discovered that the best way to read in a text file of unknown size and from an unknown source is line by line. This method will allow your application to determine whether you are using the proper EOL character and will also give you flexibility in handling files greater than 32Kb. The first thing you will want to do upon reading in a new text file is to check that you are using the proper EOL character. I have set up an "eolChar" user property in my book which holds the ASCII number of the corresponding code. The "importFile" variable is a text file on disk which you will need to assign somehow. I use the TB30DLG.DLL to bring up the Windows open file dialog box.
to handle importData
  <retrieve path & name of the importFile>
  openFile(importFile)
  sysSuspend = false
  -- SKIP OVER ANY NULL LINES AT THE BEGINNING
  -- OF THE DATA FILE
  do
    sysErrorNumber = 0
    readFile(importFile) to \
      ansiToChar(eolChar of this book)
  until It <> NULL
  -- TEST THE FIRST LINE
  sysSuspend = true
  if sysErrorNumber <> 0
    <error handling routines; these could automatically
     switch to a different EOL and try the import again>
  else
    txt = It  -- place the first line of the data
              -- file into a variable for later use
    if charCount(txt) > 80
      <error handling>
    end if
    txt = addEOLChar(txt)
  end if
  -- IF ALL IS WELL THEN IMPORT THE COMPLETE FILE
  sysError = NULL
  do
    readFile(importFile) to \
      ansiToChar(eolChar of this book)
    tmp = It
    tmp = addEOLChar(tmp)
    put tmp after txt
  until sysError = "end of file"
  closeFile(importFile)
  <additional processing of the "txt" variable>
end importData

-- ADD PROPER WINDOWS EOL CHARACTER(S)
to get addEOLChar txt
  if charToAnsi(last char of txt) = 13
    put LF after last char of txt
  else
    put CRLF after last char of txt
  end if
  return txt
end addEOLChar
The additional processing of the "txt" variable is a more important function than it may appear. This could be the place where you simply dump the data into a text or record field. If it is greater than 32Kb, then you will need to do some creative coding during the importing of the text file as I had to do with my application. For instance, if the data in the text file is in a patterned form, then you could place chunks of the file into an array or individual fields. If the text file is not standardized, then you could read in the data in chunks of 400 lines. These 400-line chunks will certainly be less than 32Kb assuming 80 characters per line which is what I have assumed in the code above.
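As a sketch of that 400-line approach (the field names, the Windows LF delimiter, and the handler itself are my assumptions, not code from the application described above), each chunk could be spilled into its own field as it is read:

```
-- Hypothetical sketch: spread a >32Kb file across several
-- fields, 400 lines (under 32Kb at 80 chars/line) per field.
to handle importInChunks importFile
  openFile(importFile)
  fieldNum = 1
  buffer = null
  lineCount = 0
  sysError = NULL
  do
    readFile(importFile) to ansiToChar(10)  -- assumes LF EOL
    put It & CRLF after buffer
    increment lineCount
    if lineCount = 400
      text of field ("chunk" & fieldNum) = buffer
      buffer = null
      lineCount = 0
      increment fieldNum
    end if
  until sysError = "end of file"
  if buffer <> null
    -- flush the final partial chunk
    text of field ("chunk" & fieldNum) = buffer
  end if
  closeFile(importFile)
end importInChunks
```

The 400-line figure comes straight from the 80-character line limit tested above: 400 lines of 80 characters is 32,000 bytes, just under the field limit.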
Further Information: See chapter 16 of the Toolbook User Manual; a more in-depth example of these procedures can be found in a sample application located on my website (http://www.unc.edu/~slim/)
To access thousands more tips offline - download Toolbook Knowledge Nuggets