Components (WCMS Part 3)

In this post we'll look at the Component Manager, one part of the WCMS, the Webcrossing Customization Management Suite.

A "component" is a large block of page functionality. In this case, it refers to:
  • Top Level View (how the top level of the site looks)
  • Folder View (how items listed in a folder look)
  • Message View (how messages look listed in a discussion)
  • Toolbar (how the list of buttons or links looks at the bottom of the page)
There are between 2 and 5 options for each of these components, and you must choose one of each. Like everything else in Webcrossing, components are hierarchical. You can use the Tabular Folder View for one folder and the Sortable Folder View for another. First, choose your default Components using the Component Manager in the Control Panel. Then, within each subfolder where you want something other than the default, use the folder level Component Manager (reachable via the Edit Folder page) to choose a different Component. There are some restrictions on putting non "Classic" views inside Classic views, but generally you can arrange things in a nested fashion if necessary.

Most Components have an extensive page of settings you can reach from the Component Manager pulldown menu once you have it enabled.



The Component Manager allows for a lot of customization "bang" for very little effort. Next time, Folder and User Items.

Themes (WCMS Part 2)

Themes are just one part of the entire Webcrossing Customization Management Suite, the WCMS. A half dozen or so themes come with the install package, or you can create your own. Themes are a combination of layout and look/feel settings, and can be applied to the entire site, or to just one or more folder hierarchies, allowing for different areas of your site to look different.



Themes can be applied in two ways: choose one from the "Choose Top-Level Theme" page, or import one. Once a theme is applied, you can change any of the hundreds of theme settings to customize it.

Since themes can be exported and imported, it's easy to develop a custom theme on your dev server and then move it to production. You can also export a theme before you start fiddling with it, so if you want to revert to what you had before, you can easily do that.

Theme settings are hierarchical. If you want your subfolders to be identical except for, say, the color of the text in the banner, you can easily do that. If you want your subfolders to look like entirely different sites, you can do that too. Find the "Choose Folder-Level Theme" from the Theme Manager on the Edit Folder page.

Themes can sit either "inside" the historical banner and footer or "outside" the banner/footer, or the banner/footer can be turned off entirely - which provides a great deal of flexibility.



On the main "Edit Theme Settings" page for either a folder or the entire site, you'll find:
  1. A small mini preview of what your theme looks like
  2. Global settings like font size and face, any HTML to go inside the <head> tags, the global stylesheet, date formatting, buttons and icons, and some basic global colors
  3. Layout settings, which pull together all the layout settings from the other various panels
  4. Color settings, which pull together all the color settings from the other various panels and some built-in plugins
  5. A page each for the banner, the nav, center, and right columns, and the footer. Each of these pages allows you to set dimensions, colors, etc. as well as determine what "widgets" - for lack of a better word - should be put where. "Widgets" can be:
    • Admin-defined HTML fragments, which can contain WCTL scripting
    • Content blobs from the Content Management plugin
    • Content provided by various plugins, like sidebar polls
    • Other interface widgets provided by the WCMS system such as login/logout links, the time and date, a simple round-robin image rotator, and the like
  6. A page of settings for the WCMS interface widgets
  7. Links to export and import your theme
Themes may not be the best way to go if you have to match an example site exactly to provide a seamless experience for users moving to and from the Webcrossing portion of your site. But if that's not a requirement, and you don't have a design team and aren't in a position to do any scripted customization, themes are going to be a great deal of help.

Environment and financial benefits

My company, Direct Learn Services, has used Webcrossing Community for about eight years now. We use it to run short, time limited online conferences – typically, they last around four days. These are hugely popular, and I’ll describe in another post a little about how they work. But in this post, I want to illustrate a couple of points about the financial and environmental benefits of using online conferencing systems. The figures I’ll use aren’t mine – they come from an Athabasca University study, published in the Canadian Journal of Learning and Technology, Spring 2009. The paper can be seen at http://www.cjlt.ca/index.php/cjlt/article/view/521/254.

The study looked at our 2008 Supporting Deaf People online conference. This is totally online, and had 241 participants from 18 countries – a truly international event, running 24 hours a day. The cost to delegates was £50 each (at the rates at the time of the study, about 69 USD). The study examined the carbon footprint and costs which would have been generated by an equivalent physical conference.

Environmental costs

I don’t want to go into the detail of how these were calculated – those interested can refer to the paper linked above. The paper assumed that if held as a physical event, the conference would be in London (we are a UK-based company). So, calculating emission costs for airplane travel for non-UK delegates, travel within the UK for UK delegates, and hotel emissions, the total CO2 emissions would be 431.09 metric tons – or around 1.79 metric tons per participant. (The equivalent in US tons is around 475 and 1.97.) So – given that emissions from attending via your computer at home or your normal workplace are very minimal indeed – this represents a massive saving. To put it into context, the average annual emissions per capita in 2005 for a US citizen were 19.52 metric tons – so the conference would represent a large proportion of that. In fact, for less developed countries, the proportion is much higher – Brazil’s per capita emissions were 1.76 metric tons – less than would be emitted attending a four-day conference held in London!
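The per-participant figure follows directly from the study's totals; a quick sanity check of the arithmetic, using only the numbers quoted above:

```javascript
// Figures from the Athabasca University study cited above.
var totalEmissionsTonnes = 431.09; // total CO2 for the physical-conference scenario
var participants = 241;

var perParticipant = totalEmissionsTonnes / participants;
console.log(perParticipant.toFixed(2)); // "1.79" metric tons per participant

// Compare with Brazil's 2005 per-capita *annual* emissions of 1.76 metric tons:
console.log(perParticipant > 1.76); // true - more than a whole year's worth
```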

Financial costs

The financial cost, averaged out per delegate, was calculated as 2168 USD – this was primarily travel and accommodation. This compares with the actual cost to delegates of 69 USD – another huge saving.

Was the experience as good?

It could be argued that whilst the physical event costs more, delegates would get a lot more from it. In fact, we don’t agree with this, and I will, in another post, demonstrate that levels of participation in our online conferences, and satisfaction with them, are generally higher than in physical conferences (though, of course, you don’t get the chance to spend a few days being a tourist in London!).

Conclusion

Of course, it’s not quite that simple. The reality is that had we decided to run this conference physically, it would never have happened. It would have been way too expensive in terms of both time and money. Our delegates are not highly paid businessmen used to jetting around the world. They are ordinary people, not paid massive amounts of money, and not able to afford to pay nearly 2200 USD for a short conference. And this brings us to the real benefit of online conferencing – it enables people who could not go to a physical conference to attend online and to get first class professional development, at a minimal cost to themselves and to the environment. It gives them opportunities they would not otherwise have had.

Customization without Scripting (WCMS Part 1)

Since version 5, Webcrossing has had a suite of tools you can use to customize your site without scripting. Collectively these tools are called the WCMS, which is an acronym for Webcrossing Customization Management Suite.



The WCMS allows you to:
  • Choose and customize a theme for your site, or build your own theme.
    • You can customize the layout, including turning on and off left and right columns and setting their widths.
    • You can place widgets or other content in any of 6 blocks on the page.
    • You can customize the theme colors, which carry through the entire site.
  • Choose the particular folder layout, message layout, and toolbar layout you want.
  • Turn on "Folder Item" plugins, which are things that live in a folder.  For example, calendars or blogs.
  • Turn on "User Item" plugins, which provide extra functionality to an individual user.  For example, the Message Center to help people track new messages and bookmarks.
  • Turn on Extension plugins, which provide extra functionality that doesn't fall under any of the previously-mentioned umbrellas. For example, Register Plus for enhanced profiles.
  • Access various localization tools which allow you to either translate the interface into another language or simply change some terminology.  Perhaps, for example, you prefer "topics" to the standard word, "discussions."
  • Access some optional developer tools.
  • Set WCMS permissions for host-level administrators.

We have said that WCMS doesn't require any scripting, and that is true. But it doesn't preclude scripting. Administrators can use WCTL in most WCMS fields and control who else has that privilege.

The WCMS is a powerful suite of tools for the non-developer to use to customize their site.  We will talk in more detail about each of these in future posts.

Passing Variables to Commands

One of the beauties of working with WebCrossing is that it handles a lot of the backend integration for you. One example is the ability to pass "command line" or URL parameters to any scripting method that you create in either WCTL or SSJS. In order to continue with the authentication series, we must first give a quick lesson on how to pass information through URLs into your methods.

WebCrossing URL structure is varied due to some legacy issues in the code. I will start with the older methods of sending information into the methods and move up from there. The original URL structure looks like this:

http://yoursite.com/webx?CC@xxxxx@location!key1=val1&key2=val2

The CC is the command code or the name of your macro/command that you create in the scripting files.

The xxxxx is the "certificate" which lets WebCrossing track users across various pages without the need for cookies.

The location is a unique ID of some "node" in the database, which usually corresponds to a folder, discussion, or message.

The key1=val1&key2=val2 portion consists of &-separated key-value pairs that your macro/command can read and use for whatever you want.

The key value pairs are separated from the rest of the URL by the use of an exclamation point ( ! ).

Let's say that you wanted to display your own simple calendar, with the month and year passed in through the URL so that a specific month/year can be displayed to the user. Your URL might look something like this:

http://yoursite.com/webx?my_cal@@!month=1&year=2011

This would mean that two variables would be available in the macro "my_cal". Your macro might then look like this to process these types of URLs in WCTL:

%% macro my_cal %%
%% set month form.month %%
%% set year form.year %%
%% // use month and year to display a calendar %%
%% //… left as an exercise for the reader :-) %%
%% endmacro %%

The key value pairs are passed in through an object called "form" where the dotted property is the same as the key in the URL. These are all strings when coming into your macro, so appropriate conversions may be necessary. These values are also URL encoded, so it may also be necessary to decode them to convert %20's to spaces, etc.
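Under the hood, the key-value portion is ordinary URL query syntax. As a rough illustration (plain JavaScript, not WebCrossing's actual parser), the splitting and decoding the server does for you looks something like this:

```javascript
// Hypothetical sketch of the parsing WebCrossing does before your macro runs:
// split the &-separated pairs and URL-decode each side.
function parseParams(query) {
  var form = {};
  query.split('&').forEach(function (pair) {
    var parts = pair.split('=');
    form[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  return form;
}

var form = parseParams('month=1&year=2011&title=My%20Calendar');
console.log(form.month); // "1" - note: a string, not a number
console.log(form.title); // "My Calendar" - %20 decoded to a space
```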

This same snippet of code to process the key value pairs in SSJS would look like this:

%% command my_cal(){
   month = form.month - 0;
   year = form.year - 0;
   // use month and year to display a calendar
   // etc.
} %%

You might ask why I subtracted 0 from the form.* variables. In normal JavaScript, subtracting 0 from a string coerces the string into a number without changing its value. For instance, if form.month held the string "11", then subtracting 0 would convert it into the numeric value 11 so that subsequent calculations could be done on the number.
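You can verify this coercion behavior in any JavaScript environment:

```javascript
var month = '11' - 0;      // string minus number forces numeric conversion
console.log(typeof month); // "number"
console.log(month + 1);    // 12 - arithmetic now works as expected

// Compare with +, which concatenates strings instead of adding:
console.log('11' + 1);     // "111"

// More explicit alternatives that do the same conversion:
console.log(Number('11'));       // 11
console.log(parseInt('11', 10)); // 11
```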

The "New" URL structure

One of the more recent URL structures in WebCrossing allows a more SEO-friendly representation of the commands and locations. This structure is laid out like this:

webx/location/macroOrCmd/cert.certificate/name.value[/name.value...]

The example URL above would be represented by this:

http://yoursite.com/webx/my_cal/month.1/year.2011

Note that the certificate can be eliminated completely as long as cookies are available to track the user session instead. This query-string-less version of the WebCrossing URL is much more friendly to search engines and humans alike.
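The mapping from path segments back to key-value pairs is mechanical. A rough sketch (plain JavaScript, not the server's actual router) of how the trailing /name.value segments populate the same form object:

```javascript
// Illustrative only: each trailing /name.value segment of the
// path-style URL becomes a key-value pair, just like the query string.
function parsePathParams(path) {
  var form = {};
  path.split('/').forEach(function (segment) {
    var dot = segment.indexOf('.');
    if (dot > 0) {
      form[segment.slice(0, dot)] = segment.slice(dot + 1);
    }
  });
  return form;
}

var form = parsePathParams('webx/my_cal/month.1/year.2011');
console.log(form.month); // "1"
console.log(form.year);  // "2011"
```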

Now that you know a little about the URL structure and passing information to your macros that you write, we can continue along with the authentication filters.
Dave Jones
dave@lockersoft.com
http://www.youtube.com/lockersoft

Calling and Registering Filters

So far when discussing filters, we have referred to them by name; messageAddFilterPre gets called for every submitted add-message form, before the form is processed; discussionAddFilterPost is called for every submitted add-discussion form, after the form is processed; emailFilterIndividual is called whenever a single-item email notification is ready to send; and so on.


But it is not necessary for your filters to conform to this naming convention. You can have multiple filters for each filter point in the request life cycle, and you can control the order in which they execute.


In fact, it's best practice not to use the default names, especially in highly customized systems, because you run the risk of name collisions. If there are two filters (or any two routines) of the same name, only one of them will load and execute. That's the kind of bug that can drive you crazy, because it won't generate an error; your code just won't fire.


You can register a WCTL macro or WCJS command of any name as a particular filter using the following WCTL directive:


%% filterName.filterRegister(macroName) %%

where filterName is a variable or string literal containing the filter's default name and macroName is a variable or string literal containing your macro or command name. So


%% "messageAddFilterPost".filterRegister("myCustomFilter") %%

will cause the routine myCustomFilter to fire after processing for every submitted add-message form.

Filters execute in the order they are registered. If there is a filter with the default name, it always executes first. There are also commands to deregister a filter, display the entire filter chain, or clear the chain entirely.
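To make the ordering concrete, here is a toy model (plain JavaScript, not WebCrossing's internals) of a filter chain in which the default-named filter always fires first and registered filters follow in registration order:

```javascript
// Toy model of a filter chain, for illustration only.
function FilterChain(defaultName) {
  this.defaultName = defaultName;
  this.registered = [];
}
FilterChain.prototype.filterRegister = function (name) {
  this.registered.push(name);
};
FilterChain.prototype.executionOrder = function (definedRoutines) {
  // A filter with the default name, if defined, always executes first;
  // registered filters then execute in the order they were registered.
  var order = [];
  if (definedRoutines[this.defaultName]) order.push(this.defaultName);
  return order.concat(this.registered);
};

var chain = new FilterChain('messageAddFilterPost');
chain.filterRegister('myCustomFilter');
chain.filterRegister('auditFilter');
console.log(chain.executionOrder({ messageAddFilterPost: true }));
// [ 'messageAddFilterPost', 'myCustomFilter', 'auditFilter' ]
```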

It is also possible to add additional filters to any filtered form submission by using a special hidden input field. If you customize the add discussion form with


<input type=hidden name="filter" value="accounting">


then the system will look for filters named discussionAddFilterPre_accounting and discussionAddFilterPost_accounting and, if found, execute them at the appropriate points in the request life cycle after the default and any registered filters.
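The naming convention for these form-supplied filters is mechanical; a small sketch (illustrative only) of how the names the system looks for are derived:

```javascript
// Illustrative only: derive the extra filter names the system looks for
// when a form carries <input type=hidden name="filter" value="accounting">.
function formFilterNames(formType, filterValue) {
  return [
    formType + 'FilterPre_' + filterValue,
    formType + 'FilterPost_' + filterValue
  ];
}

console.log(formFilterNames('discussionAdd', 'accounting'));
// [ 'discussionAddFilterPre_accounting', 'discussionAddFilterPost_accounting' ]
```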

The combination of registered filters, input-form filters, and localized filters via the template envelope for a given location (which we have not touched on here) makes it possible to add specific, highly targeted functionality as needed for new or customized objects.

Webcrossing Neighbors: full-featured social networking [Part 3]

We've covered Webcrossing Core and Community. Now it's time to examine Neighbors.

Neighbors is also built on Core. While retaining all the basic community functions like the ability to have discussions, Neighbors adds a cornucopia of social networking features: customizable profiles with granular privacy controls, the ability to track first-, second-, and third-degree friends, the capability to create and join topic groups, an enhanced live chat and IM feature, even the concept of networks.

There are three fundamental "places" you might go in a Neighbors site:
  1. Your own space. You can create a blog, photo albums, and more. You'll see a display of the most recent activity in your space and what all your friends and groups are up to, making it a sort of Grand Central Dashboard of what is going on with the people and groups you are interested in.
  2. Someone else's space. Here, you see how you are connected, who their friends are, what groups they are members of, and a display of the most recent activity there.
  3. A topic group. Here you'll usually find discussions and other interactive tools like file sharing and blogs, although the exact tools available within each group are determined by the group administrator.

Compared with Community, Neighbors is more organized initially, giving site owners a chance to dive right in and start using the site without having to make decisions about how to organize things. This, of course, means Neighbors is less flexible than Community. Neighbors hides a lot of under-the-hood features which most system administrators don't need (although there are dozens of switches to turn various features on and off). Sites which don't need the extreme flexibility of Community will appreciate getting a working, out-of-the-box social network with a smaller learning curve.

And, of course, like everything Webcrossing, it is scriptable if it doesn't meet all your needs as it is.

If you need social networking with all the bells and whistles and you like the idea of a pre-existing structure, Neighbors is for you.

Webcrossing Community: flexible mix-and-match features [Part 2]

Last time we looked at Webcrossing Core, the engine that runs everything. This time, we'll look at Webcrossing Community.

As its name implies, Community is designed fundamentally to be used as a web forum, although beyond that there are a million directions you can go.

Community starts with Core, which provides all the fundamental functions like users and access controls and all the various included servers. On top of that are layered scripts to provide a plugin architecture, apply editable CSS automatically as settings are changed, produce the WCMS (Webcrossing Customization Management Suite, a method of customizing without scripting), and allow for translation or just rewording of the User Interface.

Community is fundamentally flexible. You can have an unlimited folder hierarchy. Want to have all your discussions at the top level of your site and no folder hierarchy whatsoever? You can do that. Want to carefully organize your site so every discussion is neatly filed into a folder > subfolder > subfolder > subfolder > subfolder hierarchy? You can do that too. Neither of those may pass the "how to best organize your site" test, but the point is you are not limited by some arbitrary idea of how a site should be organized. It is entirely up to you.

Because Community has a plugin architecture, you only have to install the features that you really want. Install Register Plus to get enhanced user profiles, support for COPPA, and to put a copy of the Terms of Service on the registration page. Install Time Since to see "posted 3 minutes ago" rather than "posted 11 January 2015, 8:45 am." Install a WYSIWYG editor. Install calendars. Or any of 70 other plugins. And they're easy to install. Just click the link to "shop for plugins," find the ones you want, and they are downloaded and installed automatically. Then you decide where to turn them on.
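The Time Since behavior is easy to picture: a relative-time formatter along these lines (a generic sketch of the idea, not the plugin's actual code):

```javascript
// Generic relative-time formatter, in the spirit of the Time Since plugin.
// Not the plugin's actual code - just an illustration of the idea.
function timeSince(postedMs, nowMs) {
  var seconds = Math.floor((nowMs - postedMs) / 1000);
  if (seconds < 60) return 'posted just now';
  var minutes = Math.floor(seconds / 60);
  if (minutes < 60) return 'posted ' + minutes + ' minutes ago';
  var hours = Math.floor(minutes / 60);
  if (hours < 24) return 'posted ' + hours + ' hours ago';
  return 'posted ' + Math.floor(hours / 24) + ' days ago';
}

var now = Date.now();
console.log(timeSince(now - 3 * 60 * 1000, now)); // "posted 3 minutes ago"
```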

And of course, everything is scriptable.

With Community, you can build whatever type of web forum you want. If none of the plugins do exactly what you need, your new feature can be scripted. It really is almost infinitely flexible. If you need that flexibility and don't need the full-on social networking capability of Neighbors, Community is for you.

Webcrossing Core: start here to create an interactive site [Part 1]

There are three Webcrossing products: Core, Community, and Neighbors. In this series, we'll examine each one individually and then compare them.

Webcrossing Core is the engine that runs everything else. It is the underpinning to Community and Neighbors, and it can serve as a dandy web application development platform all by itself.

Core comes with built-in HTTP, SMTP, POP, FTP, NNTP and chat servers. It can serve ordinary web sites - or extraordinarily complex web sites - easily. It comes with a built-in NoSQL database, and all of this is installed with a single installation script you can run in 5 minutes.

You save a lot of development time starting with Core. Consider that there are two built-in scripting languages, a built-in user database, access controls including administrators and moderated users, user groups, and the concept of content objects "owned" by some author. You can create and prototype your own object types.

Why reinvent the wheel? Core is a great - and largely unknown - general web development platform, especially for interactive sites.

Writing client-side JavaScript inside server-side JavaScript without losing your mind

When most people think of JavaScript, they think of client-side JavaScript.  However, JavaScript is also used as a scripting language in Webcrossing.  Here are a few tips for staying sane while using client-side JavaScript inside a server-side JavaScript environment.

Keep your client-side and your server-side code straight.  Be careful to not get your client side variables or functions mixed up with your server-side functions.  It seems like a forehead-slapping no-brainer, but we've all done it, often accompanied by a bit of head-against-brick-wall banging before the light dawns.

Writing client-side JavaScript inside server-side JavaScript can be tedious, especially if it is halfway complex, because server-side JavaScript requires that client-side JavaScript be quoted to get it to appear on the page as a literal. For best results, you'll want to write the client-side JavaScript first, by itself. Then put it into the server-side code and quote it (escaping what you need to) appropriately.  In my experience, even that sometimes produces four-letter words, usually when server-side variable values need to be inserted into the client-side code.

Debugging the client-side code later is much simpler if you also insert line breaks into the SSJS code using \r\n at the ends of the client-side function lines:

bb += 'function myFunction() {\r\n';
bb += 'var somevar = 1;\r\n';
bb += 'return somevar;\r\n';
bb += '}\r\n';


Without this, it all runs together on one line and browser debug utilities won't be much help telling you what line the problem is on.
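One way to keep those terminators manageable is to build the client-side lines as an array and join them, so the \r\n handling lives in exactly one place:

```javascript
// Same output as the bb += ... version above, but the line
// terminators are applied in one place via join().
var bb = '';
var clientScript = [
  'function myFunction() {',
  'var somevar = 1;',
  'return somevar;',
  '}'
].join('\r\n') + '\r\n';

bb += clientScript;
console.log(bb.indexOf('\r\n') !== -1); // true - the browser debugger sees real lines
```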

When you can, break out your client-side JavaScript into a separate file.  Whatever is in a separate file won't need to be quoted. Duh :)

However, sometimes this is impossible because you need to set client-side JavaScript variables based on server-side values that are only known on the page being constructed.  For example, your client-side code might need to know the username and for whatever reason, you can't just send it as a parameter.  In this case, there are a couple of different things you can do.  One thing is to put a small client-side function on your page like this (this is server-side code):

bb += 'function getUserName() { return "' + user.userName + '"; }';

Then in your client-side JavaScript in your separate file, all you need to do is call your getUserName function to retrieve the name.  In some cases you can also do something similar using hidden form fields (again, server side code):

bb += '<input type="hidden" name="userName" value="' + user.userName + '">';

Another thing you can do is use the special %% syntax in SSJS whereby anything inside pairs of %% gets delivered immediately to the response buffer. However, that only works if you don't have any server-side variables inside your client-side script. If you don't, you might as well put it in a separate file. And it also only works if you are not concatenating your response into a ByteBuffer and returning it all at the end of your function.

Hopefully these tips will keep your brain from becoming scrambled while using JavaScript in two simultaneous contexts.


Filters, part 3: When is a filter not a filter?

The original reason filters were given that name is that they filter the response to an HTTP request. Later, as other protocols were built into the Webcrossing server - email, NNTP, etc. - filters were added to provide equivalent or appropriate functionality for them. Thus we have incoming and outgoing email filters, NNTP authentication filters, and so on.

But there is one class of filter that is arguably misnamed, because they run in the background. They are triggered by certain actions and so they give you hooks into various convenient parts of the event loop, but they cannot affect the output from the action. These filters are:

  • postCreate
  • postRename
  • postEdit
  • preDestroy
  • postUpload

The first four are triggered by (as the names suggest) the creation, renaming, editing, or deleting of any database node, no matter what type it is (folder, discussion, web file, or a custom Stored object) and no matter how the action is triggered (via the web interface, an email or NNTP request, or programmatically via a WCTL or SSJS script). This gives you enormous flexibility in tying customized actions to database changes. It also means, of course, that you have to carefully define in your code which types of nodes you want to be affected and (if you're coding in SSJS) be careful to use only methods or properties available to that type of node. If you make a mistake in that respect, the code will bomb at that point, and since it's running in the background, the only sign of it will be a logJSError stack trace file.
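In practice that means guarding on node type at the top of the filter. A hypothetical sketch (the nodeType property is a placeholder, not necessarily Webcrossing's real API, and the node is passed as a parameter here only to keep the sketch self-contained; in the real filter the triggering node is the current location on entry):

```javascript
// Hypothetical sketch only - nodeType and the return values are
// placeholders, not necessarily the real Webcrossing API.
function postCreate(node) {
  // Bail out early for every node type we don't care about,
  // so we never touch a method or property the node doesn't have.
  if (node.nodeType !== 'message') return 'skipped';
  // Safe to use message-only properties past this point.
  return 'indexed ' + node.title;
}

console.log(postCreate({ nodeType: 'folder', title: 'Archive' })); // "skipped"
console.log(postCreate({ nodeType: 'message', title: 'Hello' }));  // "indexed Hello"
```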

postUpload, unlike the others, pertains to only one type of node: files. It fires whenever a new file is uploaded via the web interface, FTP, or WebDAV. It does NOT run when a file is created programmatically.

On entry, the current location is that of the triggering node. (That's why preDestroy, unlike the others, runs just before the action rather than just after it, so that the about-to-be-deleted location is still available.)

Final caveat: because all of these run in the background, there may be a slight delay between the triggering event and the execution of your code. In human terms it's generally imperceptible, but you can't rely on the order of events.

Filters, part 2: Email Filters

As you may know, Webcrossing users can subscribe to individual discussions or entire folders. They can read their subscriptions either through the web interface or via email. As an admin you can configure the system to send individual email notifications for each new posting or to accumulate and send digest emails at intervals. Individual notifications can contain the full text of the posting or only a link back to the conference. Whatever options you enable, you can also allow your users to choose their preferred method.

That's a lot of flexibility, but there is a class of filters that can give you complete control over the email notification system (and indeed over all email processing). These filters are SSJS only; unlike many other filters they cannot be written in WCTL. (Although even that is not an absolute prohibition; see Sue's post about mixed-mode coding.)

The filters are:

emailFilterNotify fires whenever an individual email notification is ready for delivery. Inside this filter you have access to three global variables:

  • filterMessagePath (read-only) is the storedUniqueId of the item which the notification is for
  • filterMessageFull (read/write) is the complete text of the full-message notification
  • filterMessageUrl (r/w) is the complete text of the linkback notification
By altering the two writeable globals you can change the appearance and content of the notification. However, there are several things you can't do with this filter. For example, you cannot condition your changes on which user is receiving the notification, alter the recipient list, change which form of the notification the user receives, and so on. For that, you want the more powerful emailFilterIndividual.

emailFilterIndividual has a more extensive set of globals:

  • filterMessageFrom (r/w) is the bounce address for the message. By default it is set to bounce.userId.msgCounter@firstDomain where firstDomain is the first entry in the list of email domains available for the message location.
  • filterMessagePath (r-o) is, as above, the id of the item
  • filterMessageFull (r-o) is the normal full-message notification
  • filterMessageUrl (r-o) is the normal linkback notification
  • filterMessageFullPlain (r-o) is the normal full-message notification converted to a plaintext format
  • filterMessageUser (r-o) is the userId of the recipient
  • filterMessageRcpt (r/w) is the recipient email address
  • filterMessageIndividual (r/w) is the content that will ultimately be sent. It is empty when the filter is called; if it is nonempty when the filter exits then it is what will be sent instead of the default format
  • filterMessageSuppress (r/w) is a flag to prevent the message from being sent. It is empty when the filter is called; if it is nonempty when the filter exits then no notification will be sent
Note that unlike in emailFilterNotify, in emailFilterIndividual you don't edit the filterMessageFull or filterMessageUrl globals directly; you read from them and put your desired alterations into filterMessageIndividual. Here is a simple example to illustrate:

%% function emailFilterIndividual() {
       filterMessageIndividual.append( filterMessageFull ).crlf().crlf();
       filterMessageIndividual.append( 'This is an extra line.' ).crlf();
} %%

This trivial example takes the default full-text notification and adds some text to the end of it. That is what the user will receive, overriding whatever preferences they have set for email format.

Some other noteworthy points:

All the writeable globals are ByteBuffers. A ByteBuffer is a stringlike object but it comes with a suite of helper methods that make it easy to compare, parse, and convert text.

The filterMessageFull, filterMessageUrl, and filterMessageFullPlain globals are actually in MIME format. That is, they contain all the message headers followed by two crlf pairs, followed by the message content. You can use the built-in Mime object to parse and manipulate this, and even create multipart messages containing inline attachments or other non-text content.
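As a generic illustration of that header/body split, here is a sketch in plain JavaScript. This is NOT Webcrossing's built-in Mime object (which is what you'd actually use in a filter); it just shows the structure of the data in those globals:

```javascript
// Generic sketch: split a MIME-formatted string into headers and body.
// Headers end at the first blank line, i.e. two CRLF pairs in a row.
function splitMime(raw) {
  var sep = raw.indexOf('\r\n\r\n');
  if (sep === -1) return { headers: {}, body: raw };
  var headerBlock = raw.slice(0, sep);
  var body = raw.slice(sep + 4);
  var headers = {};
  headerBlock.split('\r\n').forEach(function (line) {
    var colon = line.indexOf(':');
    if (colon > 0) {
      headers[line.slice(0, colon).trim()] = line.slice(colon + 1).trim();
    }
  });
  return { headers: headers, body: body };
}

var msg = 'Subject: New posting\r\nFrom: someone@example.com\r\n\r\nHello, world!';
var parsed = splitMime(msg);
console.log(parsed.headers.Subject); // "New posting"
console.log(parsed.body);            // "Hello, world!"
```

The real Mime object handles the hard parts (multipart bodies, encodings) for you; the point here is just where the headers stop and the content starts.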


Third, there is also emailFilterDigest. As its name suggests, it fires whenever a digest-style email notification is ready for delivery. I won't discuss it in detail now, just to keep this post a manageable length. But like emailFilterIndividual, it gives you complete control over the content of the outgoing email, its recipients, and even whether to send it at all.

Getting the intermix just right (mixing SSJS and WCTL)

It's possible to call WCTL macros from a page being constructed in SSJS, and vice-versa. Here's how, along with a few tips to avoid the potholes on your journey:

Calling WCTL from SSJS:
  1. To call a WCTL macro from SSJS, use the wctlEval syntax:

    bb += wctlEval( 'use myWCTLMacroName' );

  2. Remember that you can't send function parameters in WCTL, so you may need to set a WCTL variable before calling your WCTL function:

    setWctlVar( 'myWctlVarName', value );

  3. Because WCTL has only one variable type, strings, make sure that the value in your setWctlVar() statement is a string. WCTL can do integer math on strings, so don't worry about that point if your WCTL macro needs to do any simple calculations.

  4. You can also get a WCTL variable value from JS using wctlVar:

    wctlVar( 'myWCTLVarName' );

  5. If, for some reason, you need to evaluate a chunk of WCTL which includes %%'s, you can use an alternative wctlEvalTemplate syntax:

    wctlEvalTemplate( "%" + "% if red %" + "%" + "red" + "%" + "% endif %" + "%" );

    (In other words, evaluate: %% if red %%red%% endif %%) As you can see, the syntax is a bit ridiculous: %% has a specific meaning within SSJS, so you have to divide up the individual %'s to get it to work. This isn't used much because it's usually easier to just make it a macro and call it with wctlEval.

Calling SSJS from WCTL:
  1. To call an arbitrary SSJS expression from WCTL, use the jsEval syntax:

    %% "if ( user ) { + 'Hello ' + user.userName; }".jsEval %%

  2. But if you want to specifically call an SSJS command or function, it's more efficient to use the jsCall syntax:

    %% "mySSJSCommandOrFunctionName".jsCall( "param1", param2 ) %%

    You can send as many parameters as you need to.  Parameters can be either WCTL variables or literals.

  3. If your SSJS is a command and doesn't have any parameters (this won't work with functions), you can alternatively call your SSJS like you would any WCTL macro:

    %% use mySSJSCommandName %%

  4. If your SSJS function or command is returning a numeric value, convert that number to a string in the SSJS function before returning it to WCTL.  If you don't, it will be blank because WCTL won't know what to do with it.
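For example, a hypothetical function returning a count to WCTL (the function name and value here are made up; the point is the String() conversion before the value crosses back into WCTL):

```javascript
// Hypothetical SSJS-style function returning a numeric value to WCTL.
// WCTL only understands strings, so convert the number before returning.
function messageCount() {
  var count = 42;          // imagine this came from a node property
  return String(count);    // "42" - safe to hand back to WCTL
  // return count;         // a bare number would come through blank
}

console.log(messageCount()); // "42"
```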

And now, the potholes:
  1. Any SSJS function mixed with a WCTL macro needs to be constructed so that it concatenates the page response to a ByteBuffer and then returns the ByteBuffer at the end of the function. If you don't do that, you run the risk of A Very Weird Thing(tm), where the page elements come out in the wrong order because of differences in the way the response buffer is handled in WCTL and SSJS.

    So do this:

    var bb = new ByteBuffer();
    bb += "Hello, world!";
    return bb;


    And not this:

    + "Hello, world!";

  2. Usually, WCTL and SSJS will track the "current location" such that the value of %% location %% in WCTL will be the same as the SSJS location.storedUniqueId. However, once in a while, they can get out of sync. In any case, if you are having problems with the current location not being what you think it should be, you can set it in WCTL with setPath( someLocationId ) or in SSJS with location = someNodeObject. In SSJS, if you only have the unique ID of the location, you'll need to get the object from the unique ID first:

    location = Node.lookup( 'someUniqueId' );
So off you go, on your journey. Keep the intermix just right and you'll get along just fine.

Function vs. command?

You may have noticed that in some of our SSJS examples we write routines declared as "function xyz()" and sometimes as "command xyz()", and you may be wondering what the difference is.

It's actually pretty simple. Commands can be called from the browser location bar in a URL and functions can't. In nearly all other respects they are identical.

This has some usage implications, though. If you want to add an extra level of security to a routine you wrote, name it as a function, and people won't be able to execute it from the browser directly.

And there are two more trivia facts you should know about functions and commands:

  • Some filters have to be named as commands or they won't work.
  • Commands can be called from WCTL with the usual WCTL "use" syntax rather than the more convoluted JS call syntax:

    %% use myCommand %%

    rather than

    %% "myCommand".jsCall() %%

And that's it. As you can see, in most cases it doesn't make a huge amount of difference which you use.

How to write CSS for a dynamic site

I have done my share of custom facelifts for Webcrossing sites.  Webcrossing is (justifiably) known for being extremely customizable, and many customers are excited that they can pretty much design what they want.

What this means in practical terms is:
  1. Customer contracts with web design firm to design the site
  2. Designer designs something awesome
  3. HTML/CSS jockey translates that vision into actual HTML and CSS, typically creating at most a handful of critical representative pages to be used as style examples for all other pages
  4. HTML/CSS is handed over to us to integrate into Webcrossing
  5. We carve it up into pieces corresponding to the built-in functions we want to override and the new functions we need, and set about making it dynamic
"Dynamic" is, of course, the operative word here.  Dynamic means that you have to be prepared for anything. Titles may not be the same length as in the jpeg mockups from the designers, numbers might have any number of digits, usernames might be unreasonably long, and users may not have used semantic or even halfway correct HTML in their posts. Besides those issues, there are large chunks of UI (that you've probably never seen) that have to look nice, but which will not specifically be styled by your shiny new CSS.

What's a CSS-ologist to do?

Fear not. Here are my suggestions for writing CSS for a dynamic site, gleaned from experience with various integration projects:
  1. It seems obvious, but from my experience I guess it isn't: Don't assume (unless you have asked and received confirmation) that any bit of user-provided text or other dynamic values (usernames, post view counts, posts, user bio material, etc.), are going to be exactly the same length. Provide a way for either graceful expansion or graceful truncation.
  2. Remember that the site will have large chunks of UI and lots of pages which are controlled by your CSS but which you aren't styling directly. In other words, try to ensure that the default font size is reasonable, that the default styling for lists is reasonable, that headings and their various margin and padding values are OK without specific styles being applied to them, and that user-generated HTML looks presentable as is.
  3. Be careful with "reset" CSS. It is undeniably useful, but it may remove defaults like required table borders or destroy the look of lists on existing pages that were not scheduled to be reworked.
  4. Don't write your selectors so specifically that styles can't easily be re-used elsewhere on the site, on pages you haven't specifically styled. For example:

    #banner .nav-links ul#sub-navigation { ... }

    is impossible to reuse anywhere except on a <ul> with id="sub-navigation" inside an element with the nav-links class, itself inside the banner. So if I want to re-use that look on another <ul>, guess what: I have to rewrite some CSS.
  5. Don't make containers a fixed height if you aren't sure what is going to go in them. 
  6. Be careful with fixed widths on containers if they might be re-used elsewhere.
  7. If the site has user-provided photos or avatars, don't assume they will always be a certain size, or even that they are all the same size.  At least inquire what the status of current user photos is and what the site owners' plans are going forward. Will they all be replaced with images of the correct new dimensions? Or are you obligated to provide a way for old avatars to still look OK even if they are 100 wide by 200 tall and the space you have allocated is 150 by 150?
  8. If you use sprites for icons, leave enough vertical space between icons in the sprite file so that if the title next to the icon wraps to two or even three lines, you won't have stray icons appearing on lines 2 and following.  If you don't, I have to break the icon out into a separate file and reference that individual icon in the CSS instead of the sprite file, which obviously defeats the purpose. Or, alternatively, I have to move the icons around in the sprite file in Photoshop and then rewrite a bunch of CSS sprite background-positioning rules to compensate.  And that is unlikely to happen, honestly. 
  9. On a site where users are allowed to post HTML, don't surround user post bodies with <p>...</p> tags.  That page's HTML will not be anywhere near valid if the user puts headings, divs, tables, etc. inside those measly paragraph tags.  Make a nice div instead to surround your user post bodies, so users can put whatever they want to inside there. 
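To make point 4 above concrete, here's what a more reusable version of that selector might look like (the class name and styles are hypothetical):

```css
/* Tied to one specific spot in the markup - hard to reuse: */
#banner .nav-links ul#sub-navigation { list-style: none; margin: 0; }

/* Reusable anywhere - attach the class to any <ul> that needs the look: */
ul.sub-navigation { list-style: none; margin: 0; }
```

The class-based version can be dropped onto any list on any page, including pages you never explicitly styled.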
If the HTML and CSS experts are aware of the issues involved with designing a dynamic site, the customer can probably save some money on integration costs because there is less re-working of the CSS after the fact.

Filters, part 1

You know how a dynamic web page works, of course. The browser sends a request containing a URL and maybe some other data, the server builds a response based on the data it gets, and sends it back. And somewhere in there the server might also change the contents of the database backend. When you want to do something a bit outside the normal process, getting the server to do exactly what you want it to can sometimes be a challenge.

With Webcrossing you have powerful tools that allow you to take over processing at multiple points during the life cycle of the request. This power comes from a set of special script routines called "filters". Dave's already written about the authentication filter, but that's only the tip of the iceberg.

Probably the most commonly used of the filters are the add and edit filters, each in -Pre and -Post variants. Each of the principal built-in node types has them. So, for example, for discussions there are

discussionAddFilterPre
discussionAddFilterPost
discussionEditFilterPre
discussionEditFilterPost

Let's focus on the add filters for right now.

Most filters can be written in either WCTL or SSJS. Basically, if a filter routine exists in your script codebase, it gets called at a specific point in a specific operation; on entry it has access to certain relevant data which it can read and/or alter. You can even abort normal processing and return your own response page.

Here's a simple example:

%% macro discussionAddFilterPre %%
<html><head></head><body>%% formGetNameValues %%</body></html>
%% endmacro %%

This routine, written in WCTL, will execute after the "Add Discussion" form is submitted but before the discussion is created. Because it starts with <html>, it will abort normal processing and return its contents directly to the requestor. In this case, the directive %% formGetNameValues %% simply outputs a list of all name/value pairs from the submitted form.

But you need not abort processing. You could look at various form values and take an action based on that. For example,

%% macro discussionAddFilterPre %%
%% if getValue("title") == "Puppies" %%
%% setPath ("/Kittens") %%
%% endif %%
%% endmacro %%

This filter examines the value of the discussion form's title input field. If the title is "Puppies", the current location is changed from wherever it is to the folder named "Kittens". The discussion will still be created, but perhaps in a different place than the user intended.

The -post filters work similarly, but they execute after the item is created, not before. Here's a quick example, written this time in SSJS.

%% command discussionAddFilterPost() {
    if (location.nodeAuthor.userName == "Joe Obnoxious") {
        email("sysop@somewhere.com", "Oh no, it's that obnoxious guy again, posting at " + location.nodeUrl);
    }
} %%

This filter looks at the name of the author of the just-created discussion Node (referenced by the location object) and in the case of a particular user, sends the sysop an email which contains a link to the new discussion (location.nodeUrl).

This brief article only scratches the surface of the customization potential and processing power of filters. I will write more about them in future posts.

Everything in moderation

Moderation is, well, you know, when you try to keep people from saying stuff you don't want them to.  Like four-letter words.  You could also use it when your paranoid boss wants to keep all your users from even mentioning the name of Sproingfarb Widgets, your Evil Competitor, but that's not generally encouraged.

Everything starts with the objectionable words list, which you can set in the Control Panel.  Objectionable words can also be set on a folder-by-folder basis, in case significantly different constituencies post in different parts of your site.

Webcrossing has never provided an official objectionable words list, because what is objectionable in one group might be perfectly acceptable elsewhere.  For example, letting fifth grade boys discuss "breasts" might not be what you want to do, but forbidding it in a site full of breast cancer survivors would be just plain dumb.  So it's up to you to be creative, depending on your users, subject matter, and how much you can trust them to act like halfway civilized adults.

Once you have the objectionable words defined, there are various levels of enforcement available within Webcrossing, ranging from total control to a mild "tsk tsk."
  • Nothing is checked by a human before posting, but objectionable words are replaced by the likes of @#$%

    This is the "tsk tsk" option.  It really is just a reminder, because the actual replaced word is available in the HTML source and email notifications and elsewhere the message is re-shown.  So don't use this if you need a bullet-proof solution.
  • Only posts Webcrossing thinks might include words from the objectionable words list are checked by a human

    This is a pretty good middle ground.  The moderation machinery examines each post to see if it contains a word in the list.  If it thinks there might be one, it holds up the post for approval by a human.  So only questionable material gets held up from being posted immediately.  From the moderation administration screens, the human can edit the post and release it, leave it as is and release it, keep it in the queue for somebody else to decide on, or delete it.
  • Every post needs to be approved by a human before it appears on the site

    This clearly provides the most protection, but it can be very off-putting to people if their message doesn't appear instantly.  Plus sometimes people get confused and will repost - and repost - and re-repost - in an effort to see their post.  But since every post is flagged for moderation, you don't actually need an objectionable words list at all, which can save your delicate eyes and ears from having to compile all that nastiness.

So whatever you choose, do it - (wait for it; you know what's coming).... in moderation!

Robust gatekeepers

Webcrossing has always been known for robust access controls. Out of the box, you can control who can view, who can post, who is moderated, and who is an administrator. Plus, all of this is conveniently under scripting control as well.

There are 5 levels of access:
  • Host: hosts have administrator access to view everything, post anywhere (exempt from the moderation controls), add anything, edit or delete anything belonging to anybody, move content around, change settings, and in general, play God. Or at least sub-God.
  • Participant: these are your normal, run-of-the-mill users who can post messages and read all the usual stuff.
  • Moderated: these users can do everything a participant can, except that their posts are run through the moderation machinery to check for objectionable words.
  • Read only: these people can read posts, but not post anything of their own.
  • No access: these folks can't get into places where they are marked no access. In fact, we don't even show them the titles. None of that "Nyah, nyah, this looks like a cool title but you can't get there. Nyah Nyah." On the contrary, from their perspective, it simply doesn't exist.
Access is attached to a location (folder, discussion, etc.) in the forum hierarchy, and is inherited from parent levels unless another access list is set somewhere further down in the hierarchy.


You can give access to an individual user, or a designated group of users. This makes maintaining lists of, say, class members, much simpler. You can give different access to registered users vs. guest users. For example, you might let registered users post messages, but not guests.

So, some real-world examples.
  • You could have an "Announcements" discussion where everyone was read-only, and administrators posted notices.
  • You could have a panel discussion area where a panel of invited experts would be set as participants to discuss an issue while everyone else was read-only.
  • You can set troublemakers to read-only or no access.
  • You could create a user group consisting of your under-13 users and set them to moderated, so that their posts could be checked for revealing personal information.
  • You could create a special private discussion for each member of a class, so they could ask the instructor private questions. Everyone else would be set to no access.
  • You could set all guest users to moderated (to check for spam posts) while letting your registered users post without moderation.
  • With scripting, you could create a special portal or landing page for users, and give them host access within their own space.

As you can see, the possibilities are endless. Webcrossing access controls are robust, flexible, and work out of the box without any scripting intervention.

Authentication Filters - Part 2: Cookies

Cookies, cookies, cookies - you can't have a decent web experience without them. In the past, cookies got a bad name and went through a period where they were nearly unusable for tracking users on a website.

Times have changed, and cookies are no longer the evil baked goods of old. Cookies are one of the ways that web servers can track stateless HTTP requests across multiple pages. WebCrossing can utilize information passed from a main web server, as this flow illustrates:



Visitor comes to the site; the main web server sets a cookie; the visitor navigates to the WebCrossing forums; WebCrossing reads the cookie, sends the cookie information to the main web server, and gets validation in return.

This is a very common method of authentication, and it is performed in the authentication filter like so. We will begin with just getting the cookie in the first place.
1 %% macro authenticateFilter   %%
2 %%  if userIsSysop     %%
3 %%    return      %%
4 %% endif      %%
5 %%  if form.backdoor == "42"    %%
6 %% set id userLookup("sysop")   %%
7 %% clearoutput%%%% id %%%% return  %%
8 %% else if userIsUnknown || userIsGuest  %%
9 %%// Get Cookie and use it to lookup or create a user and then log them in %%
10 %% set _email envirCookie( "auth_email" ).fromURL %%
11 %% set id userLookup( _email ) %%
12 %% if id %%
13 %%   clearoutput %%%% id %%%%return%%
14 %%  else %%
15 %%  set id userCreate( _email ) //create the user %%
16 %%  if id  %%
17 %%    set password randomString  %%
18 %%    id.setUserPassword(password) %%
19 %%    id.setUserEmail(_email)  %%
20 %%    id.setUserMailbox(_email)  %%
21 %%    id.setUserForwardTo(_email)  %%
22 %%    clearoutput %%%% id %%%%return%%
23 %%  endif  // id %%
24 %%  endif %%
25 %% clearoutput %%
26 HTTP/1.0 302 Redirect%% crlf %%
27 Location: http://your_main_site.com/login %% crlf %%%% crlf %%
28 %% endif   // userIsUnknown || userIsGuest %%
29 %% endmacro %%

While this might be a little confusing, let me explain each line and what is happening.
Lines 2-4 - This is just a bypass so that the sysop does not have to log in; the filter is skipped in this case, mostly for performance reasons.
Lines 5-7 - This is a type of "backdoor" that I like to add to these kinds of filters, because it is easy to lock yourself out of the site with them. To use something like this, just issue a URL like this to get into the sysop control panel:
http://your_site.com/?59@@!backdoor=42
The key=value pairs after the ! are turned into properties of the WCTL form object, so you can access a value using dot notation on the key, as on line 5. Line 6 looks up the sysop, and line 7 clears all the data in the response buffer and then returns the ID of the sysop that was just found. The clearoutput is important because these filters only allow certain data to be returned, and extra data in the buffer will cause the filter to fail.
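If you're curious, that key=value parsing can be sketched in plain JavaScript (illustration only; inside Webcrossing the WCTL form object gives you this for free):

```javascript
// Illustration only: turn the key=value pairs after the "!" in a
// Webcrossing-style URL into an object, the way WCTL's form object works.
function parseBangParams(url) {
  var bang = url.indexOf('!');
  if (bang === -1) return {};
  var params = {};
  url.slice(bang + 1).split('&').forEach(function (pair) {
    var eq = pair.indexOf('=');
    if (eq > 0) params[pair.slice(0, eq)] = pair.slice(eq + 1);
  });
  return params;
}

var form = parseBangParams('http://your_site.com/?59@@!backdoor=42');
console.log(form.backdoor); // "42"
```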

Lines 8-23 are where the main work is done. Line 8 checks whether the requesting user is already known to WebCrossing. If they are, this section is bypassed completely; the filter returns nothing and normal processing occurs, which is to show the next page to the logged-in person. This eliminates all the checking on each request, for performance reasons.
Line 10 finally gets the cookie called "auth_email" that was presumably set by the main server. It then uses that information to look up the user in the database, assuming that your users are stored using their email address as their "username".
If the user is NOT found, then the process of creating a new user with these credentials is executed on line 15. This automatically keeps your WebCrossing database in synch with the external authentication system on your main server. New users are created automatically.
Lines 16-21 are just ways of setting the new user's personal information at the time of creation. Any type of user property can be created and added at this time.
Line 22 - Finally, the new user's ID is returned, which essentially logs them into the WebCrossing site.

If none of the returns happen and we fall through to line 25, then it means that the user has no cookie or we could not find them in the user database. Therefore we must redirect them back to the main web server to login there and get a cookie set. It is then the job of the main server to present a form for login credentials, set a cookie, and then send them back to the WebCrossing server.

Whew! All that just to login with a simple cookie. There are some obvious security holes with this method, but they are easily fixed through the use of encrypted cookies.

We will discuss more options of authentication filters in subsequent posts.
Dave Jones
dave@lockersoft.com
http://www.youtube.com/lockersoft

Funday Monday: Prize-Winning Cat


This is the late, great Pinkwater Bonaduce, the only cat (to my knowledge) to ever win a prize at a science fiction convention masquerade, for her portrayal of Pyanfar Chanur. C. J. Cherryh was a judge that year. I wish I had video of the way she busted out laughing when we presented "Honey, I shrunk the Hani."


Search Me

Webcrossing offers several different methods to search for items in the database. Best suited to a community site is the nodeSearch method of the Node object (also accessed through the standard search page). This performs a full-text, indexed hierarchical search of the entire database or a selected part of the hierarchy. Search keywords are OR'd by default, but exact-phrase, AND'd, and NOT'd searches are also allowed. Granularity (whether to include individual Message nodes in the result list, or to return the parent Discussion node only once) and various other parameters can be set. By default, results are returned in order of descending relevance, and the user's access privileges are automatically applied.

nodeSearch returns a 2-element array: array[0] is a blank-delimited list of storedUniqueIds that match the search spec, and array[1] is the total number of matches. This is important because, by default, nodeSearch only returns a small number of the total results. That number, as well as a skip index, are parameters. This is because nodeSearch is really designed to return a paginated result set. Result sets can be made arbitrarily large, but there is a performance penalty.
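The skip and count parameters are ordinary pagination arithmetic. A generic sketch in plain JavaScript (the function and parameter names are illustrative, not nodeSearch's actual signature):

```javascript
// Generic pagination arithmetic of the kind nodeSearch's skip/count
// parameters expect. Pages are numbered starting at 1.
function pageWindow(page, pageSize, totalMatches) {
  return {
    skip: (page - 1) * pageSize,                    // results to skip over
    count: pageSize,                                // results to return
    totalPages: Math.ceil(totalMatches / pageSize)  // for "page X of Y" links
  };
}

console.log(pageWindow(3, 10, 42)); // { skip: 20, count: 10, totalPages: 5 }
```

The total-match count in array[1] is what lets you compute totalPages without fetching every result.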

Arbitrary text or values can also be appended to any database node and indexed, using Node.nodeSearchExtra. You could, for example, add the author's city of residence to make postings from a given city easily searchable.

Webcrossing also implements a tagging engine. Any database node can be tagged with name:value pairs, and the Tags object gives you amazing flexibility and power in retrieving tagged items. For maximum performance, the entire tags index is retained in memory; tag searches are blazingly fast. The drawback to the tagging engine is that you do need some scripting skills to use it; there is not yet any default user interface written.

I'll write more about the tagging engine in a future post, but if you want to learn more about it now you can read more detailed documentation (pdf).