SGML and XML are metalanguages - languages for describing other languages - which let users design their own customized markup languages for a limitless range of document types.
SGML is very large, powerful, and complex. It has been in heavy industrial and commercial use for over a decade, and there is a significant body of expertise and software to go with it. XML is a lightweight cut-down version of SGML which keeps enough of its functionality to make it useful but removes all the optional features which make SGML too complex to program for in a Web environment.
HTML is just one SGML (or XML) application, the one most frequently used on the Web.
The Web is becoming much more than a static library. Increasingly, users are accessing the Web for 'Web pages' that aren't actually on the shelves. Instead, the pages are generated dynamically from information available to the Web server. That information can come from databases on the Web server, from the site owner's enterprise databases, or even from other Web sites.
And that dynamic information needn't be served up raw. It can be analyzed, extracted, sorted, styled, and customized to create a personalized Web experience for the end-user. To coin a phrase, web pages are evolving into web services.
For this kind of power and flexibility, XML is the markup language of choice. You can see why by comparing XML and HTML markup of the same product listing. Both are based on SGML - but the difference is immediately apparent. In HTML:
<p>Apple Titanium Notebook
<br>Local Computer Store
<br>$1899
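In XML, the same listing might be marked up like this (the tag names and price value are illustrative; any names that fit the data would do):

<product>
  <model>Apple Titanium Notebook</model>
  <dealer>Local Computer Store</dealer>
  <price>$1899</price>
</product>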
Both of these may look the same in your browser, but the XML data is smart data. HTML tells you how the data should look; XML tells you what it means. With XML, your browser knows there is a product, and it knows the model, dealer, and price. From a group of these it can show you the cheapest product or the closest dealer without going back to the server.
Unlike HTML, with XML you create your own tags, so they describe exactly what you need to know. Because of that, your client-side applications can access data sources anywhere on the Web, in any format. New "middle-tier" servers sit between the data sources and the client, translating everything into your own task-specific XML.
But XML data isn't just smart data, it's also a smart document. That means when you display the information, the model name can be a different font from the dealer name, and the lowest price can be highlighted in green. Unlike HTML, where text is just text to be rendered in a uniform way, with XML text is smart, so it can control the rendition.
And you don't have to decide whether your information is data or documents; in XML, it is always both at once. You can do data processing or document processing or both at the same time. With that kind of flexibility, it's no wonder that we're starting to see a new Web of smart, structured information. It's a "Semantic Web" in which computers understand the meaning of the data they share.
A DTD is a formal description in XML Declaration Syntax of a particular type of document. It sets out what names are to be used for the different types of element, where they may occur, and how they all fit together.
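For example, a minimal DTD for the product listing shown earlier might read (a sketch, assuming the illustrative tag names used above):

<!ELEMENT product (model, dealer, price)>
<!ELEMENT model  (#PCDATA)>
<!ELEMENT dealer (#PCDATA)>
<!ELEMENT price  (#PCDATA)>

This declares that a product element must contain exactly one model, one dealer, and one price, each holding plain character data.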
The XML Specification explicitly says XML uses ISO 10646, the international standard 31-bit character repertoire which covers most human (and some non-human) languages. This is currently congruent with Unicode and is planned to be a superset of Unicode.
As the complexity of networks increases, with diverse systems and multiple infrastructure components such as routers and switches from different vendors and suppliers, managing these systems in a standard way becomes much more difficult. The network might run on a standard protocol, but in any larger organisation a whole host of subsystems and protocols will exist. This can be a nightmare to manage, both for support teams and for application developers seeking to get their systems to run correctly within the environment.
SNMP – the Simple Network Management Protocol – seeks to provide a common framework for controlling all these network elements. Its core approach is to divide the network into two roles – manager and agent – and to centralize control and monitoring between diverse systems. It's quite a simple protocol which operates on a request-reply basis between an SNMP manager and an SNMP agent. The variables exposed by the agent are defined in a management information base (MIB) and can be set or queried by the manager.
The variables in turn are identified by object identifiers (OIDs), which are arranged in a hierarchical naming scheme. These are normally very long numerical values, so they are abbreviated into simple names that support staff can read. The hierarchy also allows further subdivision; for example, many routers from a specific vendor can be managed by assigning an object identifier to each instance.
There are many groups of SNMP variables, such as system, interface, address translation, IP, ICMP, TCP and UDP. These groups can be used to manage or query specific devices on a network. You can use queries to get information about almost any aspect of the network, such as requesting an interface's MTU or looking up the IP addresses of a specific device.
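As a rough illustration, a support script might shell out to the Net-SNMP command-line tools to read variables from the system and interface groups. This is a sketch only: the hostname and community string are placeholders, and it assumes the Net-SNMP utilities are installed.

import subprocess

def snmp_get(host, community, oid):
    """Read a single MIB variable using the Net-SNMP snmpget tool."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# sysDescr.0 lives in the system group, ifMtu.1 in the interface group.
print(snmp_get("router.example.com", "public", "1.3.6.1.2.1.1.1.0"))
print(snmp_get("router.example.com", "public", "1.3.6.1.2.1.2.2.1.4.1"))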
The other key function of SNMP is traps, which are a way for the agent to notify the manager that something significant has happened. This is essential for managing a network effectively and identifying problems before they become serious. Traps allow the agent to initiate communication with the manager, whereas the majority of traffic flows from manager to agent in the form of controls and queries. Usually these traps are sent to UDP port 162 on the managing device. Early versions of SNMP sent them in the clear, where they could be intercepted, but later versions such as SNMPv2 provide some level of authentication and privacy. This security can be supplemented by having support and admin staff use a VPN, especially when accessing the manager remotely from outside the internal network over the internet.
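A minimal sketch of the listening side: a manager process simply accepting trap datagrams on UDP port 162 and logging their origin. Real managers decode the ASN.1/BER-encoded trap PDU, which is omitted here.

import socket

# SNMP traps arrive on UDP port 162 on the managing device.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))  # binding to a low port needs admin/root rights

while True:
    data, (addr, port) = sock.recvfrom(4096)
    # A real manager would BER-decode the trap PDU in 'data' here.
    print("trap from %s: %d bytes" % (addr, len(data)))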
Any circuit-level tunneling through a proxy server, such as SOCKS or SSL, will allow most protocols to be passed through a standard proxy gateway. Whenever you see a statement like this, remember what it implies: the proxy does not actually understand the protocol, it merely transmits it transparently. For instance, the popular tunneling protocol SSL can tunnel virtually any TCP-based protocol without problem; it's often used to add some protection to weak protocols like FTP and Telnet.
But this can create a headache for a proxy administrator. Not only can all sorts of protocols be allowed into a network, but the administrator often has no knowledge of their contents due to encryption. There are some short-term solutions which provide a limited amount of protection – for example, blocking access based on port numbers: only allowing specific ports to be tunneled, such as 443 for HTTPS or 636 for secure LDAP. This can work well, but remember that some programs, such as Identity Cloaker, allow the port to be configured, letting protocols and applications be tunneled on non-standard ports. It is therefore not an ideal solution, and not one that can be relied upon in the longer term to keep a network and proxy secure.
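As a sketch of that short-term measure, a proxy might check the destination port of each tunnel request against an allowlist before relaying anything. The function name and port set below are illustrative only.

# Ports the proxy is willing to tunnel; everything else is refused.
ALLOWED_TUNNEL_PORTS = {443, 636}  # HTTPS, secure LDAP

def may_tunnel(host, port):
    """Crude port-based policy: no inspection of the protocol itself."""
    return port in ALLOWED_TUNNEL_PORTS

print(may_tunnel("example.com", 443))   # True
print(may_tunnel("example.com", 8443))  # False, even if it really is TLS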
The obvious solution is to use a proxy server that can verify the protocol being transmitted. This requires a great deal more intelligence built into the proxy, but it is possible. It carries a bigger overhead, and it makes the proxy server more expensive and perhaps more complicated and trickier to manage. Without this sort of intelligence, however, you leave open the possibility of, for example, an FTP session being set up through an SSL tunnel.
In some ways proxies already do some of this: protocols that are proxied at the application level rather than tunneled cannot be exploited like this. HTTP, FTP and even Gopher cannot be used to trick entry, simply because there is no 'dumb' direct tunnel – the proxy understands the protocol and will only relay legitimate responses.
Zone transfers are an important part of distributing changes between name servers. Every domain on the internet (and within private networks, for that matter) must have a master server which holds the definitive records of names and addresses for that domain. Zone transfers are the mechanism by which changes on the master server are distributed out to the slave name servers, which may be spread far and wide. It's important that these are done regularly, even if changes are infrequent, if only to ensure the validity of the current name space.
For example, when a slave name server restarts, or at periodic intervals, it will contact the master server if possible and check for updated records. If it finds updates, it requests a zone transfer from the master server. This is simply a transfer of zone maps and DNS records from the master to the slave name server, and it performs the core function of keeping a DNS service up to date. Unlike the majority of DNS transactions, the protocol used here is TCP. The main reason is that a zone transfer can contain a large amount of data, and TCP is the best generally available transport for ensuring reliable delivery.
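For illustration, the third-party dnspython package can request a full zone transfer (AXFR) in a few lines. The server address and zone name below are placeholders, and the master must be configured to allow the transfer.

import dns.query
import dns.zone

# Ask the master server for a full zone transfer (AXFR); this runs over TCP.
zone = dns.zone.from_xfr(dns.query.xfr("192.0.2.1", "example.com"))
for name, node in zone.nodes.items():
    print(name, node.to_text(name))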
The zone transfer is obviously a prime target for any hacker who wants to compromise a domain or a specific server. Being able to intercept or even modify zone transfers gives an attacker the potential to take over systems. Modifying addresses will be difficult for most attackers, but even intercepting these transfers can be very dangerous: a zone file includes the details of every device on a specific network or domain, including the IP addresses assigned to each. Some of these hosts will typically be non-internet-facing for security reasons, so it's important that zone transfers are secured. This relies on the configuration of the name servers themselves, and in particular on how zone transfers are handled. In BIND 4.9.4 and later, for example, you can specify that only certain IP addresses or subnets are authorised to send and receive transfers. There are other useful security features implemented in later versions of BIND too.
For older systems you should ideally look to update, but blocking traffic to port 53, the standard port for DNS traffic, could be viable. However, port-blocking solutions often cause other difficulties with specific applications and internal devices; you may very well end up blocking legitimate traffic and breaking applications. Like breaking the vice director's international VPN, which he uses to watch ITV in Spain – that sounds an unlikely casualty of DNS security measures, but it's one that's happened to me!
The TCP window size is the method employed by the receiving host to tell the sender how much buffer space is currently available for data on that connection. It's a flow-control system which ensures that the receiving host doesn't get overloaded with data, and it's important that this is a dynamic figure, allowing for varying receive rates based on outside factors such as network speed. For example, the window size becomes much smaller when data has been received but not yet processed by the receiving host. If the buffer becomes completely full, the window is set to zero, which tells the sender to stop transmitting data temporarily. When some of the data has been processed and there is room in the buffer again, the receiving device sends a window size update to resume the flow of data.
From this explanation we can see that the TCP window size is mostly controlled by the receiving host; this gives it control of the TCP session and prevents the client becoming overloaded. It's worth bearing this in mind, because it's natural to assume that data flow is controlled by the sending device, not the receiving one. This principle holds in much networking analysis, and when you think about it, it is entirely logical: it ensures that both devices operate within their own operational limits.
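You can see the receiver's influence from ordinary socket code: shrinking the kernel receive buffer bounds the window this host can advertise, so a slow reader eventually forces the sender to pause. A minimal sketch, with a placeholder address and OS-dependent buffer behaviour:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# A small receive buffer caps the window this host can advertise; if the
# application reads slowly, the peer is throttled, eventually to a zero window.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
sock.connect(("192.0.2.10", 8080))
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))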
The TCP window size is of course of special interest to hackers, security analysts and intrusion detection analysts, as it gives some very useful information about the client you are talking to. For instance, by firing packets at an unknown system with a tool like Nmap, you can fingerprint and identify its operating system by analysing the response and how the TCP window size is set. Most Windows systems, for example, have default TCP receive window sizes defined in the registry which will not normally change under ordinary circumstances. For Nmap and other fingerprinting tools, the TCP window size is therefore a useful way of identifying a client operating system with minimal interaction with it.
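A rough sketch of the idea using the third-party scapy library: send a SYN to an open port and read the window advertised in the SYN/ACK. Raw-socket privileges are required, the target address is a placeholder, and mapping window values to operating systems is left as a lookup against a fingerprint table.

from scapy.all import IP, TCP, sr1  # third-party: scapy

# Probe an open port and inspect the TCP window size in the SYN/ACK.
reply = sr1(IP(dst="192.0.2.20") / TCP(dport=80, flags="S"), timeout=2)
if reply is not None and reply.haslayer(TCP):
    # Fingerprinting tools compare this value against known TCP stacks.
    print("advertised window:", reply[TCP].window)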
Another attribute useful to security specialists is its role in honeypots and IDS tools such as Snort and LaBrea. LaBrea can effectively slow down a connection from an attacker by manipulating the TCP window size; in many ways it can thwart an attack, or at least make it a much more time-consuming and cumbersome task.
Any automated identity system needs some way of both creating and distributing authorization and authentication assertions. One of the most famous is of course Kerberos, which has its own methods for meeting this requirement. However, many digital systems are now starting to use SAML – the Security Assertion Markup Language – which is becoming the de facto standard for security credentials.
SAML of course uses XML as the standard for representing security credentials, but it also defines a protocol for requesting and receiving credential data from a SAML authority service. One of the key benefits of SAML is that it is pretty straightforward to use, and this fact alone has increased its usage considerably. A client makes a request about a subject to the SAML authority. The authority in turn makes assertions about the identity of that subject with respect to a particular security domain. To take one simple example, the subject could be identified by an email address within its originating DNS domain.
So what exactly is a SAML authority? It is quite simply a service (usually online) that responds to SAML requests; the replies it returns are called assertions. There are three different types of SAML authority which can be queried – authentication authorities, attribute authorities and policy decision points (PDPs). These types of authority return distinct types of assertion -
SAML authentication assertions
SAML attribute assertions
SAML authorization assertions
Although there are three different definitions here, in practice most authorities are set up to produce all three types of assertion. Occasionally, in very specific applications, you'll find an authority designed to produce only a particular subset, but this is quite rare, especially in online applications. All assertions contain certain common elements, however, such as an issuer ID, a timestamp, an assertion ID, and a subject (including the security domain and name).
Each SAML attribute request begins with a standard syntax – <samlp:Request…..> – and the content then refers to the specific parts of the request. This could be virtually anything, but in practice it's often something straightforward, like asking which department or domain an email address is associated with.
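An illustrative SAML 1.x attribute query might look roughly like this (identifiers abbreviated and attribute values trimmed; the exact structure depends on the SAML version in use):

<samlp:Request xmlns:samlp="urn:oasis:names:tc:SAML:1.0:protocol"
               RequestID="..." IssueInstant="...">
  <samlp:AttributeQuery>
    <saml:Subject xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion">
      <saml:NameIdentifier Format="...emailAddress">user@example.com</saml:NameIdentifier>
    </saml:Subject>
  </samlp:AttributeQuery>
</samlp:Request>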
Just like every other communication method that exists online, you can use encryption to secure XML documents. In fact, it is recommended that, where possible, all important XML documents be encrypted completely before being transmitted across the wire. The document is then decrypted with the appropriate key when it reaches its destination.
There is a problem with this, however: when you encrypt the whole message, you obfuscate all of it, and unfortunately some parts of an XML message may need to remain in clear text. Take SOAP messages, the format computers use to exchange remote procedure calls (RPCs) over the internet. Although you can encrypt certain parts of a SOAP message, the headers at a minimum must stay in clear text, otherwise intermediary devices would not be able to see routing and other important information.
The other alternative is to encrypt the channel itself, typically using something like SSL or SSH. This protects the message in transit by encrypting the entire channel. However, there is another issue here: channel encryption only protects the message between the two endpoints; everywhere else it sits in clear text. These problems were real issues for XML developers, and to combat them the XML Encryption standard was developed.
The primary goal of this standard is to allow the partial, secure encryption of any XML document. Very much like other XML standards, such as XML Signature, the encryption standard has quite a lot of different parts. This enables it to deal with all sorts of contingencies, but the core functions are quite simple and easy to follow.
Any encrypted element in an XML document is identified using the <EncryptedData> element, which consists of two distinct parts -
An optional <KeyInfo> element that gives information about the key. This element is actually the same one that is defined in the XML Signature specification.
A <CipherData> element that either includes the actual encrypted data inside a <CipherValue> element, or contains a reference to the encrypted data enclosed in a <CipherReference> element.
For instance, XML encryption might be used in an online payment system which sends orders as XML documents. The order document may contain all the information about the order, with sensitive information such as the payment details and credit card numbers contained in a dedicated payment element. Most of the order should be left in clear text so that it can be processed quickly, but the payment information should be encrypted, and decrypted only when the payment is actually being processed. XML encryption makes this possible by allowing specific parts of the document – i.e. the payment information – to be encrypted on their own.
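A sketch of such an order document after encryption: the Order and Item element names are invented for this example, while the payment element has been replaced by the EncryptedData structure defined in the XML Encryption standard.

<Order>
  <Item>Apple Titanium Notebook</Item>
  <EncryptedData xmlns="http://www.w3.org/2001/04/xmlenc#">
    <CipherData>
      <CipherValue>A23B45C56...</CipherValue>
    </CipherData>
  </EncryptedData>
</Order>

Here everything except the payment details remains readable, and only the holder of the right key can recover the data inside the CipherValue.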
If you run or work for a business that hires agencies or contractors, audit recovery can be a very important topic. You may not have heard of it before, but it could be worth anywhere from hundreds of thousands of dollars to millions, depending on your field and spend. First, let me describe what audit recovery is. When a contract is bid, there is a chance you will be overbilled at some point. This may be overbilling on per diems, materials used, overtime, and many other expenses. When you are overbilled, you most likely have no idea it has happened and simply pay the bill without a second thought. It is the job of a contract auditor to find those mistakes, and there is now software that makes this process much easier.
Contract compliance software analyzes your accounting and looks for irregularities in billing. It then shows you those irregularities so that you can follow up with your agency or contractor about any billing that may be incorrect. Best of all, you can use it to work out how to fix your accounting policies for the future; as you implement the required changes, your accounting becomes much better.
Japan is planning a brand new disaster warning system this year. It will be based on social networking sites and was drafted in the terrible aftermath of the 2011 disasters. One of the major problems in the aftermath of a disaster is coordination and communication: natural disasters tend to destroy telephone lines and standard emergency lines.
The internet, however, is a lot more resilient than a fixed copper wire connecting a phone system. There are many ways to connect to the internet, and it is virtually impossible to completely destroy its infrastructure. We all have different ways to connect, and many of us routinely use social networking sites every day.
The test will focus on the most popular social sites in Japan, which are Twitter and a local site called Mixi. Both can be accessed from computers, laptops, mobile phones and a host of other devices. The test is initially intended to help establish some ground rules for this kind of communication and to ensure that things like false disaster reports don't end up creating havoc and panic.
Obviously power cuts and broken telecoms infrastructure will still have an impact, but it is hoped that the resilient nature of the internet will be able to overcome some of these difficulties. The scheduled test will simulate a disaster and see how people use their mobiles and other devices to communicate.
There is great hope for this system, and using something like Twitter does seem a sensible option for mass communication. There are issues that may have an impact in many countries, though, not least the increasing number of restrictions being put on internet access.
There are worries that such filters will push people into using proxies, which would erode some of the internet's advantages as a communication medium. Add to this the many companies imposing blocks and restrictions on a commercial basis; even publicly funded organisations like the BBC block access to their site from abroad.
Fortunately you don't need a Japan proxy yet, as Japan does not currently block or censor the internet to any great extent, but many countries do. Whether this will prove to be a real issue is hard to say, but it's certainly a concern.
Technology is one of the most talked-about subjects today: when we hear about a new technology, we tend to stop and listen. With so many technologies around, you probably also want a safe and healthy environment, and it is important not to forget the environment as society develops. It can be hard to believe, but most things released and invented have some sort of impact on the environment. Green energy is one of the most appreciated and beneficial technologies, and there are simple ways to use it to help both the environment and your home.
For example, try setting the air conditioning to run 1 degree Celsius warmer in the summer and 1 degree Celsius cooler in the winter. You will barely notice the difference in temperature, yet you can save a lot of energy and money, and the amount of carbon used can fall by around 14 percent. It also makes sense to switch from traditional light bulbs to energy-saving ones, but don't wait until your old bulbs are all burned out only to discard working ones; throwing away good bulbs is itself a waste. According to some people, technology negatively affects people's lives and makes them lazy, but that is a narrow view.
Start by incorporating green energy sources into your home and cutting back the amount of electricity you use. Make sure you don't waste power by leaving things switched on when you are not using them; you become far more efficient with energy once you make the switch to alternative sources. Recycling is another easy task that makes for a greener home and cuts energy costs. This is one of the ways technology can help you save money.
OK, so the actual day sounds a little contrived, but the Web needs a birthday at some point. On this exact date, March 12th 1989, Tim Berners-Lee, the British physicist credited with the invention of the web, wrote a short memo to his boss.
It was entitled 'Information Management: A Proposal', and it contained details of how he would like to develop a way to share information across a network of computers. He suggested that instead of the standard hierarchical system commonly used in the scientific community, a web of individual, linked notes (such as references) could be used.
The first web pages appeared two years later, in 1991, and within another seven years nearly 25% of the population of the USA were using the web. It's a quite staggering take-up rate: to reach the same level of penetration, television took over a quarter of a century, and electricity over 45 years.
Our children can probably not even envisage life before the world wide web, and perhaps some of us older people feel the same. I'm not even sure how I would go about sorting out my house insurance, booking a holiday or buying Christmas presents without the assistance of the internet.
The early days of the web, though, were even more exciting: you were never quite sure what you'd stumble across next. Strange technologies like Gopher, FTP and Archie were used to navigate, and you'd often come face to face with little communities of nerds who seemed lost forever. It's all very mainstream now, and nobody needs even the slightest technical knowledge to get online or find what they need. The only problem is that although the technology has become much more accessible, other forces are putting up barriers all the time.
For example, don't you think it incredible that I can sit with my laptop in a cafe barely two miles from the Canadian border and be blocked from watching CTV simply because of my location? It's true: my IP address has determined that I am not allowed to watch the Canadian national news online, yet if I hop in my car and drive for ten minutes I'll be fine. In reality, people bypass these blocks using proxies and VPNs.
But it doesn't seem right, not in a system designed to facilitate the unrestricted sharing of information for the common good. This situation is dictated by big business and economics, of course: information and media are bought and sold, then have restrictions put on their use. There is another worry, though: the free and open communication of the web is under even bigger threat from governments.
Most of the world's major governments seem involved in various forms of surveillance of the ordinary internet user. If we have used the internet, then we will have been spied on: logs are kept of what we do, what we say and who we speak to online. This is usually justified as 'fighting crime' or 'defeating terrorism', but the problem is that the fundamental democratic nature of the web is being undermined.
The amount of surveillance being undertaken by organisations like MI6 and the FBI is huge and all-encompassing, with little control over what this data is used for. So much so that, for many of us, security tools which hide our activities and mask our locations behind different IP addresses seem essential for any level of privacy.
Still, the world is certainly a better place for the world wide web, and indeed a somewhat smaller one – so happy birthday!
SGML and XML are both languages that are used for defining markup languages. More specifically, they are metalanguage formalisms that facilitate the definition of descriptive markup languages for the purpose of electronic information encoding and interchange. SGML and XML support the definition of markup languages that are hardware- and software-independent, as well as application-processing neutral.
SGML is an International Standard, defined in ISO 8879:1986, Information Processing - Text and Office Systems - Standard Generalized Markup Language (SGML), as amended. A key philosophical commitment underlying SGML is the separation of the representation of information structure and content from information processing specifications. Information objects modeled through an SGML markup language are named and described (using attributes and subelements) in terms of what they are (from a defined perspective), not in terms of how they are to be displayed or otherwise processed.
XML (Extensible Markup Language) is a dialect of SGML that is designed to enable 'generic SGML' to be served, received, and processed on the World Wide Web. XML originated in 1996, as a result of frustration with the deployment of SGML on the Internet. The SGML family of standards, which includes SGML (the modeling framework), DSSSL (the transformation framework for presentation) and HyTime (the linking and timing framework), are ISO standards that proved difficult to implement and aroused little interest outside specialist fields of expertise. XML simplified the requirements for implementation, with the specific intention of enabling deployment of markup applications on the Internet.
Both SGML and XML are supported by a suite of companion standards addressing such features as transformation, presentation, linking, and event triggering. A broad range of commercial and public-domain software has been developed to assist users with markup implementation.