SGML and XML are metalanguages - languages for describing other languages - which let users design their own customized markup languages for limitless different types of documents.

SGML is very large, powerful, and complex. It has been in heavy industrial and commercial use for over a decade, and there is a significant body of expertise and software to go with it. XML is a lightweight cut-down version of SGML which keeps enough of its functionality to make it useful but removes all the optional features which make SGML too complex to program for in a Web environment.

HTML is just one of many SGML or XML applications – the one most frequently used on the Web.

The Web is becoming much more than a static library. Increasingly, users are accessing the Web for 'Web pages' that aren't actually on the shelves. Instead, the pages are generated dynamically from information available to the Web server. That information can come from databases on the Web server, from the site owner's enterprise databases, or even from other Web sites.

And that dynamic information needn't be served up raw. It can be analyzed, extracted, sorted, styled, and customized to create a personalized Web experience for the end-user. To coin a phrase, web pages are evolving into web services.

For this kind of power and flexibility, XML is the markup language of choice. You can see why by comparing XML and HTML. Both are based on SGML - but the difference is immediately apparent:


In HTML:

	<p>Apple Titanium Notebook
	<br>Local Computer Store

In XML:

	<model>Apple Titanium Notebook</model>
	<dealer>Local Computer Store</dealer>

Both of these may look the same in your browser, but the XML data is smart data. HTML tells how the data should look, but XML tells you what it means. With XML, your browser knows there is a product, and it knows the model, dealer, and price. From a group of these it can show you the cheapest product or closest dealer without going back to the server.
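To see what that means in practice, here is a minimal sketch of the "cheapest product" idea, written in Python with the standard-library ElementTree module rather than in a browser. The <price> element and the second product are invented for the example; they do not appear in the snippet above.

    # Because each field is labelled, the client can sort by meaning rather
    # than by appearance - no trip back to the server is needed.
    import xml.etree.ElementTree as ET

    catalog = """
    <catalog>
      <product>
        <model>Apple Titanium Notebook</model>
        <dealer>Local Computer Store</dealer>
        <price>1299.00</price>
      </product>
      <product>
        <model>Generic Aluminium Notebook</model>
        <dealer>Discount Computer Warehouse</dealer>
        <price>999.00</price>
      </product>
    </catalog>
    """

    root = ET.fromstring(catalog)
    cheapest = min(root.findall("product"),
                   key=lambda p: float(p.findtext("price")))
    print(cheapest.findtext("model"), "from", cheapest.findtext("dealer"))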

Unlike HTML, with XML you create your own tags, so they describe exactly what you need to know. Because of that, your client-side applications can access data sources anywhere on the Web, in any format. New "middle-tier" servers sit between the data sources and the client, translating everything into your own task-specific XML.

But XML data isn't just smart data, it's also a smart document. That means when you display the information, the model name can be a different font from the dealer name, and the lowest price can be highlighted in green. Unlike HTML, where text is just text to be rendered in a uniform way, with XML text is smart, so it can control the rendition.

And you don't have to decide whether your information is data or documents; in XML, it is always both at once. You can do data processing or document processing or both at the same time. With that kind of flexibility, it's no wonder that we're starting to see a new Web of smart, structured information. It's a "Semantic Web" in which computers understand the meaning of the data they share.

A DTD is a formal description in XML Declaration Syntax of a particular type of document. It sets out what names are to be used for the different types of element, where they may occur, and how they all fit together.
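As an illustration, here is a small, hypothetical DTD for the product example earlier, together with a validity check using the third-party lxml library (one of several tools that can check a document against a DTD). The element names and content model are assumptions made up for the example.

    # A made-up DTD: a catalog holds one or more products, and each product
    # must contain a model, a dealer and a price, in that order.
    from io import StringIO
    from lxml import etree

    dtd = etree.DTD(StringIO("""
    <!ELEMENT catalog (product+)>
    <!ELEMENT product (model, dealer, price)>
    <!ELEMENT model   (#PCDATA)>
    <!ELEMENT dealer  (#PCDATA)>
    <!ELEMENT price   (#PCDATA)>
    """))

    doc = etree.fromstring(
        "<catalog><product>"
        "<model>Apple Titanium Notebook</model>"
        "<dealer>Local Computer Store</dealer>"
        "<price>1299.00</price>"
        "</product></catalog>")

    # True if every element appears where the DTD says it may occur.
    print(dtd.validate(doc))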

The XML Specification explicitly says XML uses ISO 10646, the international standard 31-bit character repertoire which covers most human (and some non-human) languages. This is currently congruent with Unicode and is planned to be a superset of Unicode.

The difference between HTML 4 and HTML 5

A lot of people have been asking me what the main difference is between the new HTML 5 format and the older HTML 4. Let’s take a look at some of the larger changes in this format.
One of the main changes is how it defines error handling. Right now you can write some pretty bad code, tweak it over and over, and still get your website displayed in a way that looks alright to visitors, even though the code behind it is full of malformed HTML and bugs. This makes it really hard to write a new web browser, as there are no clear rules on how errors should be handled. In HTML5, error handling has been improved quite a bit.

Another thing that has been improved is the set of web application features.
In HTML 4 we had to depend on third-party software such as Flash, Java or Silverlight for many of the special features we needed. In HTML5 you can do a lot of this directly in the browser with standard HTML, CSS or JavaScript. For instance, HTML now supports tags such as <audio> and <video>, and it also lets applications store data in a local storage area outside cookies. New input types such as date, for which the browser can expose an easy user interface (instead of the old JS-based calendar date-pickers), and browser-supported form validation will make developing web applications much simpler, and make them much faster for the users, since many things are supported natively rather than hacked in via JavaScript. Based on a lot of data analytics and data mining, the people behind HTML have worked out what people needed most and implemented much of it in this new version. We also get improved element semantics, so that tags such as <strong> and <em> now actually mean something, and even <b> and <i> have been given deliberately vague semantics that still work when parsing legacy documents.
<header>, <aside>, and <nav> should replace the majority of <div>s used on a web page, making your pages a bit more semantic but, more importantly, easier to read. No more painful scanning to see just what that random </div> is closing – instead you’ll have an obvious </nav>, </aside>, </header>, </section> or </article>, making the structure of your document much more intuitive. All of this aims to make HTML5 simpler and easier to read than the older versions.

A few of the key features are:

  • New elements – section, video, progress, nav, meter, time, aside, canvas
  • New input types – dates and times, email, url
  • New parsing rules oriented towards flexible parsing and compatibility
  • New attributes – ping, charset, async
  • Deprecated elements dropped – center, font, strike
  • Global attributes – id, tabindex, repeat

The thought behind HTML5 was to make it a lot easier to bring good web pages to users without having to rely as much on third-party plugins such as Java or Flash, and to create a structure that would be easy to read and understand. Another thing is speed: by building a lot more features into the language itself, you don’t have to rely as much on JavaScript, which could speed up a lot of pages, since those features are now natively supported by the browser.
Personally I really enjoy these changes, and feel that in a few more years, it will really begin to make a lot of sense, as more and more websites move to pure HTML5 solutions.

Automating Text Creation Using SGML

In 1995, the staff of the HTI began work on the American Verse Project, an electronic archive of American poetry. Although a few eighteenth-century works will soon be included, the vast majority of the works are from the nineteenth and early twentieth centuries. The collection is both browsable and searchable. Users who just wish to scan the listing of available texts and read a poem can, and many do; a number of the works included are hard to find outside large academic libraries, or are in very poor condition and don’t circulate, so their availability on the internet is a great boon to readers and researchers.

The ability to search the collection is useful for tasks as simple as locating a poem that starts with the line “Thou art not lovelier than lilacs” or as complex as comparing examples of flower imagery in early American poetry.
The list was expanded to include poets of special interest to American literary historians in the Department of English at Michigan; in all, a list of almost 400 American poets was gathered.

Working from this list, a survey of publications was made and an electronic bibliography of electronic and print versions was built. Several hundred titles from the Michigan collection were assessed to decide whether they fell within the scope of the project; texts were chosen and prioritized based on their scholarly interest as well as their physical properties (e.g., extent of deterioration and “scanability”).

The volumes chosen for inclusion in the American Verse Project are scanned without being disbound; currently the HTI uses Xerox Scan Manager software and a Xerox 620 scanner for batch scanning, and BSCAN with a Fujitsu 3096 scanner has also been used. TypeReader is the software package mainly used for optical character recognition (OCR); it has performed very well in recognizing the older typefaces in the nineteenth-century material and has an unobtrusive, user-friendly proofing interface. ScanWorx, a UNIX program available from XIS, has been used less often; because it can be trained to recognize non-standard characters, such as the long s, it is useful for the earliest volumes in the collection. PrimeOCR, a program developed by Prime Recognition that uses up to five OCR engines to dramatically improve accuracy, is being evaluated for potential use. The HTI gives a great deal of attention to accuracy in the digitization process, on the premise that access to reliable electronic texts matters.

After a volume is in electronic form, automated routines are run to supply a first level of SGML markup, identifying clear text structures (lines of poetry, page breaks, paragraphs) and potential scanning faults, such as missing pages. Careful manual markup follows, using SoftQuad’s Author/Editor SGML editing program and the TEI’s “TEILite” DTD. The HTI’s encoding staff clears up ambiguous markup introduced by the automatic tagging routines and adds markup that is too complex or open-ended for them.
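As a rough idea of what such a first-pass routine might look like, here is a toy Python sketch. The heuristics and the page-break convention are invented for illustration, but the element names follow TEI conventions (<lg> for a line group or stanza, <l> for a line of verse, <pb/> for a page break).

    # Toy first-pass tagger: wrap each non-blank line in <l>, group lines
    # between blank lines into <lg>, and turn "[page N]" markers into <pb/>.
    def first_pass_markup(raw_text):
        out, in_stanza = [], False
        for line in raw_text.splitlines():
            line = line.rstrip()
            if line.startswith("[page"):              # marker left by the scanning step
                out.append('<pb n="%s"/>' % line.strip("[]page "))
            elif not line:                            # a blank line ends a stanza
                if in_stanza:
                    out.append("</lg>")
                    in_stanza = False
            else:
                if not in_stanza:
                    out.append("<lg>")
                    in_stanza = True
                out.append("  <l>%s</l>" % line)
        if in_stanza:
            out.append("</lg>")
        return "\n".join(out)

    print(first_pass_markup("[page 1]\nThou art not lovelier than lilacs, no,\nNor honeysuckle;"))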

After encoding, a formatted printout of the text is used to proof against the original volume and to allow a senior encoder to review the markup. All illustrations found in the original volume are scanned and referenced in the encoded text; an image of the title page and its verso is also included. Finally, full bibliographic information, including the size of the electronic file and a local call number, is recorded in the header of the electronic text. A cataloger reviews the header, and a record for the electronic text is created in Michigan’s online library catalog.


Retaining Logs – Legal and Policy Requirements

In most countries there are legal and policy requirements about log retention. To comply with leading audit standards such as Sarbanes-Oxley, ISO 9000 and VISA CISP, a corporate policy is a necessity. One important part of this policy is the retention of critical system log archives: how these logs are stored and for how long they are kept. The standard you need to comply with determines how things like event and SQL logs must be stored and how long they must be retained.

To meet these requirements, a corporate policy covering log file archiving and retention is essential. Many companies adapt existing Syslog servers and store the data on these for whatever duration is specified, whilst others copy the files onto centralised file servers or shares. The other main option is to move the logs onto some sort of backup disk or tape system for long-term storage. This option is often useful in that it can be incorporated into a disaster recovery procedure or policy by moving the data off-site.

Whichever system is used, the basic concept is centralising logs from various systems into a single storage system. One advantage of this is that it moves the responsibility for the logs and the data they contain from the individual system owners onto a centralised system. This is much easier to manage and control: all the files can be governed by a central policy rather than by individual application requirements, which often differ.

There are other benefits to the centralised storage model besides making policies easier. A practical advantage is that you have a single point to analyse for information from all of a company’s systems, and you can use analytical tools to parse and filter information from all the logs at once.
For example, using Microsoft’s free Log Parser tool you could gather all the system start-up events from every system in the environment.
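The same sort of query can be sketched in a few lines of Python (shown here instead of Log Parser’s own SQL-like syntax). The directory path and the start-up markers below are assumptions for the example.

    # Scan a directory of collected log files and pull out start-up events.
    import glob

    def startup_events(log_dir="/var/log/central"):
        events = []
        for path in glob.glob(log_dir + "/*.log"):
            with open(path, errors="replace") as f:
                for line in f:
                    # Example markers only: a Linux boot line, or a Windows
                    # "event log service started" record exported as text.
                    if "kernel: Linux version" in line or "EventID=6005" in line:
                        events.append((path, line.strip()))
        return events

    for source, event in startup_events():
        print(source, "->", event)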

There is another important reason that audit standards enforce the retention of system logs, and that is non-repudiation. This means that you can use the logs as proof that a transaction or process happened, so it cannot be repudiated later. A simple example of this is the signing and transmitting of a digital message: if the message is digitally signed, the sender cannot later deny having sent it, and logs can be used to demonstrate the same sort of thing.


Introduction to Object Technologies

Object-oriented technology has brought software development past procedural programming and into a world where programs can be readily reused. This of course vastly simplifies the development of applications, because any programmer can leverage existing modules quickly and efficiently. Operating systems and applications are created as multiple modules that are slotted together to create a functional working program. This has many advantages, but one of the major ones is that any module can be replaced or updated at any time without having to update the entire operating system.

It can be difficult to visualise these concepts, but just imagine your web browser as a container into which users can add objects that provide extra functionality. These users don’t need to be programmers either; anybody can download an object from a web server into the container. It could be something like an applet or an ActiveX component which improves or adds functionality. It could even be something that adds an extra utility to the browser, perhaps an app that performs currency exchange or looks up a website’s address or page rank.

This is not new technology, but it is slowly changing the world. It might be surprising to hear that Windows NT was actually built on this object-oriented technology: within that system printers, computers and other devices were viewed as objects. It is much easier to see in later versions that use Active Directory, where even users and groups are classed as individual objects.

The definition of an object really is the crucial point in this development. An object can be virtually anything – a parcel of data or a piece of code – with an external interface which the user can utilise to perform its function. The crucial point is that any or all of these objects can be readily combined to produce something of value to the user, and all the objects can interact with each other by exchanging data or messages. The client-server model which has served the technology space for so long becomes rather outdated, to a point: simply stated, any object can act as either a client or a server (or even both).

Harvey Blount

Internet VPNs

An internet VPN can provide a secure way to move data packets across the web if you have the right equipment. There are two basic methods for doing this.

Transport Mode – the technique of encrypting only the payload section of the IP packet for transport across the internet. Using this method the header information is left entirely intact and readable by network hardware, which means that routers can forward the data as it traverses the internet.

Tunnel Mode – using this method, IP, SNA, IPX and other packets can be encrypted and then encapsulated into new IP packets for transport across the internet. The main security advantage of this method is that both the source and destination addresses of the original packet are hidden from view.
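A rough, simplified picture of the difference, using ESP (one of the IPsec protocols commonly used for this) and ignoring trailers, authentication data and field sizes:

    # Simplified packet layouts for the two modes described above.
    TRANSPORT_MODE = ["original IP header (visible to routers)",
                      "ESP header",
                      "encrypted payload"]

    TUNNEL_MODE = ["new IP header (only the tunnel endpoints are visible)",
                   "ESP header",
                   "encrypted original IP header + payload"]

    for name, layout in (("Transport", TRANSPORT_MODE), ("Tunnel", TUNNEL_MODE)):
        print(name + " mode: | " + " | ".join(layout) + " |")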

In either case, Internet VPNs trade away the reliability and guaranteed capacity that are available with frame relay or ATM virtual circuits. However, the comparatively low cost of these internet VPN products makes them very popular; low-cost UK-based VPN services, for example, are widely available. Most VPN service providers have recognised the concerns around security and have made security and privacy a priority in product development.

Encryption obviously provides some level of security, however the other layer needed is secure authentication. It is essential that secure authentication protocols are used to ensure that the people or devices at each side of the link are authorised to use the connection.

There are numerous scenarios for these internet VPNs, but most commercial ones fall into two distinct groups. The first is the site-to-site connection, which is designed to tunnel large amounts of data between two specific sites. The second covers remote-access services, which are usually dial-up-type connections from individual users, generally into a corporate site.

Both of these methods will normally use a local connection into an ISP, with the wide-area sections carried over the internet. They are distinct options and will be used in very different circumstances. The ‘personal VPN’ market is growing every year, particularly due to the increasing filtering and censorship which is becoming standard on the internet. Using a VPN allows you both to protect your privacy and to avoid the filters and blocks. In China a huge number of people use these services because of the increasing number of blocks employed by the Chinese state.

Harry Hawkins

Different Types of XML Parser

An XML parser is a software module that reads XML documents and provides access to their content. The parser typically builds a structured tree from the document and returns the results to the application, such as a browser. In effect, an XML parser is a processor that determines the structure and properties of the data; an application can then read an XML document through the parser and use the output to build a display. There are numerous parsers available, and some of these are listed below:

The Xerces Java Parser
The primary uses of the Xerces Java Parser are building XML-aware web servers and ensuring the integrity of e-business data expressed in XML.

XP and XT

XP is an XML parser and XT is an XSL processor; both are written in Java and were contributed to the community by James Clark. XP detects all documents that are not well formed and aims to be the fastest fully conformant XML parser written in Java. XT is an implementation of XSL Transformations (XSLT), used to transform XML documents into other formats.


SAX

The Simple API for XML (SAX) originated with the members of a public mailing list (XML-DEV). It offers an event-based approach to XML parsing: instead of moving from node to node in a tree, the application moves from event to event. Events are reported for things such as the start and end of each XML tag, runs of character data, and parsing errors.

SAX is well suited to small parsers and to applications that need speed. It is the right choice when the input must be processed quickly and economically, element by element.
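Python’s standard library includes a SAX implementation, which makes the event-driven style easy to see. This minimal sketch simply prints the dealer names it encounters; the XML content is made up for the example.

    # The parser calls back into our handler as it meets each tag and run of
    # text, rather than handing us a finished tree.
    import xml.sax

    class DealerHandler(xml.sax.ContentHandler):
        def startElement(self, name, attrs):
            self.current = name                 # event: an opening tag was seen
        def characters(self, content):
            if getattr(self, "current", None) == "dealer" and content.strip():
                print("Dealer:", content.strip())
        def endElement(self, name):
            self.current = None                 # event: a closing tag was seen

    xml.sax.parseString(
        b"<catalog><product>"
        b"<model>Apple Titanium Notebook</model>"
        b"<dealer>Local Computer Store</dealer>"
        b"</product></catalog>",
        DealerHandler())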

XML Parser for Java (XML4J)

It runs on any platform with a Java virtual machine and is sometimes called XML4J. It has an interface which allows you to take a string of XML-formatted text, identify the XML tags, and use them to extract the tagged information.

Harold Evensen

Important Authentication Solutions – Kerberos

Kerberos is one of the most important authentication systems available to developers and network architects. Its aim is simple: to provide a single sign-on to an environment comprising multiple systems and protocols. Kerberos therefore allows mutual authentication and, importantly, secure encrypted communication between users and systems. It differs from many authentication systems in that it does not rely on security tokens, but instead relies on each user or system to maintain and remember a unique password.

When a user authenticates against the local operating system, normally there is an agent running which is responsible for sending an authentication request to a central Kerberos server.  This authentication server responds by sending the credentials in encrypted format back to the agent.   This local agent then will attempt to decrypt the credentials using the password which has been supplied by the user or local application.   If the password is correct, then the credentials can be decrypted and the user validated.
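The following toy sketch illustrates only that principle – it is not the Kerberos protocol – using the third-party cryptography package: the "server" sends material encrypted under a key derived from the user's password, so only a client that knows the password can decrypt it. The salt, iteration count and messages are arbitrary choices for the example.

    # Toy demonstration: password-derived key, encrypted "ticket", decryption
    # succeeds only when the same password is known on the client side.
    import base64, hashlib
    from cryptography.fernet import Fernet, InvalidToken

    def key_from_password(password, salt=b"demo-salt"):
        raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return base64.urlsafe_b64encode(raw)

    # "Server" side: encrypt a ticket under the user's password-derived key.
    ticket = Fernet(key_from_password("correct horse")).encrypt(b"ticket for alice")

    # Client side: the right password decrypts, a wrong one does not.
    print(Fernet(key_from_password("correct horse")).decrypt(ticket))
    try:
        Fernet(key_from_password("wrong password")).decrypt(ticket)
    except InvalidToken:
        print("wrong password: ticket cannot be decrypted")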

After successful validation the user is also given authentication tickets which allow them to access other Kerberos-authenticated services. In addition, a set of cipher keys is supplied which can be used to encrypt the data sessions. This is important for security, especially when dealing with a wide range of different applications and systems under a single authentication system.

Once validation is complete, no further authentication is necessary – the ticket will allow access until it expires. So although the user does need to remember a password to authenticate, only one is required to access any number of systems and shares on the network. There are a lot of configuration options to fine-tune Kerberos, particularly in a Windows environment where Kerberos is used primarily to access Active Directory resources, and you can restrict access based on a whole host of factors in addition to the primary authentication. It is effective for authentication in a fluid environment where users may log on to many different systems and applications, even when those systems keep changing their IP addresses.

There is one main reason that Kerberos has become so successful: it is freely available. Anyone can download and use the code free of charge, which means it is widely used and is constantly developed and improved. There are many commercial implementations of Kerberos, such as those from Microsoft and IBM (Global Sign-On); these normally add extra features and a management system. There have been concerns over various security flaws in Kerberos; however, because it is open source, these have been addressed in the latest version, Kerberos V.

George Hempseed


Command Line Utilities for Troubleshooting DNS

There are, of course, many tools for configuring, installing and troubleshooting DNS, and many of them can make life an awful lot easier. Here are some of the most popular ones, which exist on various platforms.

Nslookup

This utility is probably the oldest and most widely used DNS tool available. Its primary function is to run individual, specific queries against all manner of resource records. It is even possible to perform zone transfers using this tool, which is why it’s so important.

Ipconfig

This tool is often used daily to release and renew DHCP addresses. However, it can also be used to perform some DNS functions, so it’s certainly a useful client tool to get to grips with. There are a couple of very useful switches which supply DNS-related functionality. The /displaydns switch returns the contents of the client resolver cache, showing the Record Name, Type, TTL, Data Length and RR Data; the client will use this cached data to answer lookups at least until the TTL expires, at which point it will query a name server again. The /flushdns switch erases the contents of the resolver cache; in troubleshooting terms this means that cached data will not be used and a fresh request will be sent to a name server. Finally, /registerdns refreshes the client’s DHCP leases and re-registers its DNS records.

Netdiag

This is one of the most useful general diagnostic tools you will find in a Windows environment. It performs a long list of network connectivity tests, including a specific DNS test. Using the switch /test:DNS, the program will check each active network card and see whether it has an A record registered in the domain. The additional switch /DEBUG can be used in conjunction with this to produce verbose output, which is extremely helpful in troubleshooting DNS issues. It can be found in the Windows Support Tools directory on the installation disks. It’s surprisingly useful when checking a DNS service or the programs that depend on it.

Dnsdiag

This utility is especially useful for checking email issues that are DNS-related; a DNS misconfiguration can cause all sorts of email problems, as many have experienced. It works by simulating all the DNS-related activities that would be performed by an SMTP agent when delivering email. There is a caveat in its use for this sort of diagnostic work: you’ll need to run it on a computer which has either an Exchange or SMTP service installed locally.

Most of these tools can be used to solve a huge range of DNS-related issues, so they’re worth getting to grips with. A great exercise is to use them on a new installation or DNS design: run through the tools to check that DNS is working properly.


DNS Messages

If you want to write programs that can make use of DNS messages then you must understand the format. So where will you find all the queries and responses that DNS uses to resolve addresses? Well, the majority are carried over UDP, and each message is fully contained within a single UDP datagram. They can also be carried over TCP, but in that case they are prefixed with a 2-byte value which indicates the length of the query or response. The 2-byte prefix itself is not included in that length calculation – a point which is important!
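As a tiny sketch of that framing rule, this is all the extra work TCP transport requires (the message bytes themselves are assumed to be built elsewhere):

    # Prefix a DNS message with its 2-byte, big-endian length for TCP transport.
    # The prefix counts only the message bytes, not the prefix itself.
    import struct

    def frame_for_tcp(dns_message: bytes) -> bytes:
        return struct.pack(">H", len(dns_message)) + dns_message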

All DNS communication uses a single format simply called a message. Every DNS function, from a simple address lookup to the lookups behind commercial ‘Smart DNS’ services, uses this very same format. The format of the message follows this basic template –

  • Header
  • Question – the question for the name server
  • Answer – resource records answering the question
  • Authority – resource records pointing towards an authority
  • Additional – additional information

Some sections will be missing depending on the query; however, the header will always be present. This is because the header contains fields which specify which of the remaining sections are present, whether the message is a query or a response, and whether any specific response codes are set.

The names of the sections following the header are derived from their actual use; it’s all pretty common-sense stuff. The Question section is indeed a question directed at a name server, and within this section are fields which define the question.

  • QTYPE – Query Type
  • QCLASS – Query Class
  • QNAME – Query Domain Name

Specifically, if you are programming or developing any application which relies on this functionality – a Smart DNS service, for example – it is important to understand these fields properly, including their exact format. The QNAME represents the domain name being queried as a sequence of labels: each label consists of a length octet followed by that many octets (the characters of the label), and the whole name is terminated by a zero-length octet.
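Putting the header and question fields together, here is a minimal sketch of how a simple query message could be assembled. It only builds the message and does not send it; the query ID is an arbitrary choice, QTYPE 1 means an A record and QCLASS 1 means the Internet class.

    # Build the wire format of a DNS query: 12-byte header, then the question.
    import struct

    def encode_qname(domain: str) -> bytes:
        out = b""
        for label in domain.split("."):
            out += bytes([len(label)]) + label.encode("ascii")
        return out + b"\x00"                      # zero-length label ends the name

    def build_query(domain: str, query_id: int = 0x1234) -> bytes:
        # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0.
        header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
        question = encode_qname(domain) + struct.pack(">HH", 1, 1)   # QTYPE, QCLASS
        return header + question

    print(build_query("example.com").hex())

Sent in a UDP datagram to port 53 of a name server, this would be a complete query for the A record of example.com.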

A Primer on SNMP

As the complexity of networks increases, with diverse systems and multiple infrastructure components such as varied routers and switches (all from different vendors and suppliers), managing these systems in a standard way becomes much more difficult. The network might run on a standard protocol, but in any larger organisation a whole host of subsystems and protocols will exist. This can be a nightmare to manage, both for support teams and for application developers trying to get their systems to run correctly within the environment.

SNMP – the Simple Network Management Protocol – seeks to provide a common framework to control all these network elements. Its core function is to divide the network into two roles, manager and agent, in order to define these elements and centralise control and monitoring across diverse systems. It’s quite a simple protocol which operates on a request-reply basis between an SNMP manager and an SNMP agent. The variables defined by the agent are described in the management information base (MIB) and can be set or queried by the manager.

The variables are in turn identified by object identifiers (OIDs), which are arranged in a hierarchical naming scheme. These are normally very long numeric values which are abbreviated into simple names specifically so that support staff can read them. They can be further subdivided, for example to control many routers from a specific vendor by assigning an object identifier to each instance.
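As a small illustration of that naming scheme, the numeric identifiers below are the standard MIB-II ‘system’ group OIDs; the lookup helper itself is just for show.

    # The numeric OID spells out the path down the naming tree:
    # iso(1).org(3).dod(6).internet(1).mgmt(2).mib-2(1).system(1)
    MIB2_SYSTEM = "1.3.6.1.2.1.1"

    WELL_KNOWN = {
        "sysDescr":  MIB2_SYSTEM + ".1.0",
        "sysUpTime": MIB2_SYSTEM + ".3.0",
        "sysName":   MIB2_SYSTEM + ".5.0",
    }

    def to_oid(name: str) -> str:
        """Expand the short name support staff use into the full numeric OID."""
        return WELL_KNOWN[name]

    print(to_oid("sysName"))            # -> 1.3.6.1.2.1.1.5.0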

There are lots of groups of SNMP variables – system, interfaces, address translation, IP, ICMP, TCP and UDP, for example. These groups can be used to manage or query specific devices on a network, and the queries can retrieve information about almost any aspect of the network, such as requesting an interface’s MTU or querying for the IP addresses of a specific device.

The other key function of SNMP is the SNMP trap, which is a way for the agent to notify the manager that something significant has happened. This is essential in order to manage a network properly and to identify problems before they become significant. Traps allow the agent to initiate communication with the manager, whereas the majority of the communication flows from the manager to the agent in the form of controls and queries. Usually these SNMP traps are sent to UDP port 162 on the managing device. They used to travel in the clear and could be intercepted, but later versions of the protocol (notably SNMPv3) add authentication and privacy. This security can be supplemented by allowing support and admin staff to use a VPN, especially when accessing the manager remotely from outside the internal network over the internet.