Jude 1:3

The sin and doom of Godless men
3 Dear friends, although I was very eager to write to you about the salvation we share, I felt I had to write and urge you to contend for the faith that was once for all entrusted to the saints.

Comments:
All of us intend to worship God. Many disobey the words of God, and many criticize, but all we must do is follow the Bible's words for our salvation. Saint Jude urges us to have faith in God. All we need is the love of God and the words of God.

Tuesday, February 9, 2010

RFC 1918

In the Internet addressing architecture, a private network is a network that uses private IP address space, following the standards set by RFC 1918 and RFC 4193. These addresses are commonly used for home, office, and enterprise local area networks (LANs), when globally routable addresses are not mandatory, or are not available for the intended network applications. Private IP address spaces were originally defined in an effort to delay IPv4 address exhaustion, but they are also a feature of the next generation Internet Protocol, IPv6.

These addresses are characterized as private because they are not globally delegated, meaning they are not allocated to any specific organization, and IP packets addressed by them cannot be transmitted onto the public Internet. Anyone may use these addresses without approval from a regional Internet registry (RIR). If such a private network needs to connect to the Internet, it must use either a network address translator (NAT) gateway, or a proxy server.

The most common use of these addresses is in residential networks, since most Internet service providers (ISPs) only allocate a single routable IP address to each residential customer, but many homes have more than one networked device, for example, several computers and a printer. In this situation, a NAT gateway is usually used to enable Internet connectivity to multiple hosts. Private addresses are also commonly used in corporate networks, which for security reasons, are not connected directly to the Internet. Often a proxy, SOCKS gateway, or similar devices, are used to provide restricted Internet access to network-internal users. In both cases, private addresses are often seen as enhancing security for the internal network, since it is difficult for an Internet host to connect directly to an internal system.

Because many private networks use the same private IP address space, a common problem occurs when merging such networks: the collision of address space, i.e. the duplication of addresses on multiple devices. In this case, networks must be renumbered, often a time-consuming task, or a NAT router must be placed between the networks to masquerade the duplicated addresses.

It is not uncommon for packets originating in private address spaces to leak onto the Internet. Poorly configured private networks often attempt reverse DNS lookups for these addresses, causing extra traffic to the Internet root nameservers. The AS112 project attempted to mitigate this load by providing special blackhole anycast nameservers for private addresses which only return negative result codes (not found) for these queries. Organizational edge routers are usually configured to drop ingress IP traffic for these networks, which can occur either by accident, or from malicious traffic using a spoofed source address. Less commonly, ISP edge routers will drop such egress traffic from customers, which reduces the impact to the Internet of such misconfigured or malicious hosts on the customer's network.

Private IPv4 address spaces

The Internet Engineering Task Force (IETF) has directed the Internet Assigned Numbers Authority (IANA) to reserve the following IPv4 address ranges for private networks, as published in RFC 1918:

RFC 1918 name   IP address range                 number of addresses   classful description      largest CIDR block (subnet mask)   host ID size
24-bit block    10.0.0.0 – 10.255.255.255        16,777,216            single class A            10.0.0.0/8 (255.0.0.0)             24 bits
20-bit block    172.16.0.0 – 172.31.255.255      1,048,576             16 contiguous class Bs    172.16.0.0/12 (255.240.0.0)        20 bits
16-bit block    192.168.0.0 – 192.168.255.255    65,536                256 contiguous class Cs   192.168.0.0/16 (255.255.0.0)       16 bits

Classful addressing is obsolete and has not been used in the Internet since the implementation of Classless Inter-Domain Routing (CIDR) starting in 1993. For example, while 10.0.0.0/8 was a single class A network, it is common for organizations to divide it into smaller /16 or /24 networks.
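These three ranges can be checked mechanically; the following sketch uses only Python's standard `ipaddress` module:

```python
# Check whether an address falls in one of the RFC 1918 private ranges.
import ipaddress

RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),       # 24-bit block
    ipaddress.ip_network("172.16.0.0/12"),    # 20-bit block
    ipaddress.ip_network("192.168.0.0/16"),   # 16-bit block
]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("10.1.2.3"))     # True
print(is_rfc1918("172.32.0.1"))   # False: just outside 172.16.0.0/12
print(is_rfc1918("8.8.8.8"))      # False: globally routable
```

Note that `ipaddress` also exposes an `is_private` property, which covers these blocks along with other reserved ranges (loopback, link-local, and so on).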

Private IPv6 addresses

The concept of private networks and special address reservation for such networks has been carried over to the next generation of the Internet Protocol, IPv6.

The address block fc00::/7 has been reserved by IANA as described in RFC 4193. These addresses are called Unique Local Addresses (ULA). They are defined as being unicast in character and contain a 40-bit random number in the routing prefix to prevent collisions when two private networks are interconnected. Despite being inherently local in usage, the IPv6 address scope of unique local addresses is global (cf. IPv6 addresses, section "IPv6 Address Scopes").
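A minimal sketch of ULA prefix generation, assuming a simple random Global ID (RFC 4193 itself recommends deriving the Global ID from a timestamp and an EUI-64 hashed with SHA-1, so `secrets.randbits` here is a simplification):

```python
# RFC 4193 sketch: fd00::/8 (fc00::/7 with the "locally assigned" L bit
# set) plus a 40-bit random Global ID yields a /48 routing prefix.
import ipaddress
import secrets

def random_ula_prefix() -> ipaddress.IPv6Network:
    global_id = secrets.randbits(40)                # 40-bit random Global ID
    prefix_int = (0xFD << 120) | (global_id << 80)  # fd byte, then Global ID
    return ipaddress.IPv6Network((prefix_int, 48))

net = random_ula_prefix()
print(net)  # e.g. fd3a:91c7:2e55::/48 (random on each run)
assert net.subnet_of(ipaddress.IPv6Network("fc00::/7"))
```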

A former standard proposed the use of so-called "site-local" addresses in the fec0::/10 range, but due to major concerns about scalability and the poor definition of what constitutes a site, its use has been deprecated since September 2004 by RFC 3879.

Link-local addresses

Another type of private networking uses the link-local address range codified in RFC 3330 and RFC 3927. These addresses are useful for automatic self-configuration by network devices when Dynamic Host Configuration Protocol (DHCP) services are not available and manual configuration by a network administrator is not desirable.

In IPv4, the block 169.254.0.0/16 is reserved for this purpose, with the exception of the first and the last /24 subnets in the range (169.254.0.0/24 and 169.254.255.0/24). If a host on an IEEE 802 (Ethernet) network cannot obtain a network address via DHCP, an address from 169.254.1.0 to 169.254.254.255 may be assigned pseudorandomly. The standard prescribes that address collisions must be handled gracefully.
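The selection rule can be sketched as follows; the `pick_link_local` helper and its optional seed are illustrative only (RFC 3927 suggests seeding the generator from the interface's MAC address so a host tends to re-pick the same address):

```python
# Sketch of IPv4 link-local address selection per RFC 3927: pick a
# pseudorandom address in 169.254.0.0/16, skipping the reserved first
# and last /24 subnets.
import random
import ipaddress

def pick_link_local(seed=None) -> ipaddress.IPv4Address:
    rng = random.Random(seed)
    # low 16 bits from 0x0100 to 0xFEFF skips 169.254.0.x and 169.254.255.x
    host = rng.randint(0x0100, 0xFEFF)
    return ipaddress.IPv4Address((169 << 24) | (254 << 16) | host)

addr = pick_link_local()
print(addr)              # e.g. 169.254.137.42 (random on each run)
assert addr.is_link_local
```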

The IPv6 addressing architecture sets aside the block fe80::/10 for IP address autoconfiguration.

Link-local addresses have even more restrictive rules than the private network addresses defined in RFC 1918: packets to or from link-local addresses must not be allowed to pass through a router. (RFC 3927, section 7).

Private use of other reserved addresses

Historically, address blocks other than the private address ranges have been reserved for other potential future uses. Some organizations have used them for private networking applications despite official warnings of possible future address collisions.

RFC References

  • RFC 1918, "Address Allocation for Private Internets"
  • RFC 2036, "Observations on the use of Components of the Class A Address Space within the Internet"
  • RFC 2050, "Internet Registry IP Allocation Guidelines"
  • RFC 2101, "IPv4 Address Behaviour Today"
  • RFC 2663, "IP Network Address Translator (NAT) Terminology and Considerations"
  • RFC 3022, "Traditional IP Network Address Translator (Traditional NAT)"
  • RFC 3330, "Special-Use IPv4 Addresses" (superseded by RFC 5735)
  • RFC 5735, "Special-Use IPv4 Addresses"
  • RFC 3879, "Deprecating Site Local Addresses"
  • RFC 3927, "Dynamic Configuration of IPv4 Link-Local Addresses"
  • RFC 4193, "Unique Local IPv6 Unicast Addresses"

RFC1918 Caching Security Issues

By Robert Hansen
Date: 08/06/2009

Preface: Intranets are intended to be secured from the outside by way of firewalls and other networking devices. Unfortunately, there has been a move towards non publicly-routable address space as a method of protection, rather than other methods of protecting private IP space. This paper will outline a number of flaws that can be exploited by an adversary because of the use of well known non publicly-routable IP address spaces.

Overview: One of the principal technologies employed by enterprises is the concept of non publicly-routable IP address space (otherwise known as RFC1918). RFC1918 explains that one of the principal reasons people use it is to delay the IPv4 address exhaustion that IPv6 is intended to solve. Unfortunately, it accomplishes this by reusing the same set of IP ranges for everyone who adopts this tactic.

     10.0.0.0   -  10.255.255.255   (10/8 prefix)
   172.16.0.0   -  172.31.255.255   (172.16/12 prefix)
  192.168.0.0   - 192.168.255.255   (192.168/16 prefix)

The bulk of intranet IP space falls in 10.* and 192.168.*, and of that, most falls in 10.0.0.*, 10.1.1.*, 10.10.10.*, 192.168.0.* and 192.168.1.*, narrowing the most likely subnets down to 1280 addresses (256 addresses * 5 subnets). This tends to lead to collisions of IP space, where two separate networks look virtually identical from an IP perspective. Many technologies use private IP addresses as a method of securing themselves. Likewise, browsers implement the same origin policy, which prohibits a server on the Internet from reading content on another server (including servers in internal address space).
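The arithmetic above can be reproduced directly; the subnet list is the "most likely" set named in the text:

```python
# Enumerate the five most common RFC 1918 /24 subnets, giving
# 5 * 256 = 1280 candidate addresses an attacker might target.
import ipaddress

COMMON_SUBNETS = [
    "10.0.0.0/24", "10.1.1.0/24", "10.10.10.0/24",
    "192.168.0.0/24", "192.168.1.0/24",
]

candidates = [
    host
    for net in COMMON_SUBNETS
    for host in ipaddress.ip_network(net)  # all 256 addresses per /24
]
print(len(candidates))  # 1280
```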

Because of caching issues within the browser, and other technologies that may use the IP address as the single factor of security, it becomes possible to create situations where the collisions can be used to an attacker's advantage, and even allow them to compromise internal networks.

The Attacks: There are a number of potential attacks that are possible, and many of them reside around trust relationships people have with third parties. One such instance is a VPN (virtual private network) connection between a good entity and one that intends to compromise the victim's network.

VPN and RFC1918 caching security issue
Fig 1. Click to enlarge

The first attack, as seen in Fig 1, has a user connecting a client VPN into a hostile network. The user's browser is tricked into routing to, visiting, and caching many pages that would normally be reserved for internal addresses within the user's own network. Because of caching issues within browsers, there is no need to break the same origin policy - only to wait. Once the VPN connection is torn down (assuming the routes are then removed), the user's browser connects to their real RFC1918 addresses, which are now under the control of an attacker by way of a JavaScript back door (e.g. BeEF). This sort of exploit would most often be seen between two competing companies that share information only occasionally, or between two companies in a partner/vendor relationship.

VPN and RFC1918 caching security issue
Fig 2. Click to enlarge

The second attack, as seen in Fig 2, is similar to the first, but instead of two office networks it involves a user running a client VPN from a home office. As in the first example, caching within the browser allows an evil administrator to persist a JavaScript backdoor beyond the lifetime of the VPN connection, and in this way compromise the user's home network. This may appeal to administrators who want to compromise executive management's home networks without leaving as large a trail as traditional malware.

VPN and RFC1918 caching security issue
Fig 3. Click to enlarge

The third and final VPN issue, found in Fig 3, is between two sets of servers that are interconnected by way of a client VPN. Like the previous examples, the evil administrator can push routes for RFC1918 address space and cause the remote server to reroute its traffic over the Internet. This could affect database connections, APIs, email, SMB backups and so on, allowing a remote administrator to temporarily interrupt services, compromise servers and so on.

Man in the middle RFC1918 caching security issue
Fig 4. Click to enlarge

Another issue that falls outside of the client VPN issues described above is a man in the middle attack scenario, as seen in Fig 4. Most security experts would say that once a man in the middle attack is in progress, there is little point discussing the issue further, because the user is already completely compromised. While this is somewhat true, it doesn't necessarily give the attacker what they are interested in. For instance, an Internet cafe may provide the attacker with access to webmail or social networking accounts, but it may not give the attacker access to the user's home network or work network.

An attacker in this scenario could intercept and modify HTTP requests and inject malicious iframes for the likely RFC1918 targets. Because of the aggressive caching policy within the browser, the malicious JavaScript can be cached well beyond that session. Once the user leaves the Internet cafe and later turns on their portable device within the context of another RFC1918 network, it is simply a matter of time before the user, particularly an administrator or someone with access to physical devices, visits one of these pages.

In all of these attacks an attacker could do research on all modern networking devices that use JavaScript includes, stylesheets or other objects that can embed JavaScript within them. By forcing these included files to be cached with the malicious JavaScript client inside them, security devices can be thwarted without necessarily having to know ahead of time which one the user uses, and without having to have the user logged in ahead of time (as would be the case with CSRF and DNS rebinding attacks). Instead, the attacker can simply wait, theoretically indefinitely, for the attack to work. Realistically, the window of exposure is limited by the lifetime of the compromised user's cache.

Caveats and Defenses: These attacks rely on a number of factors. Firstly, the first three attacks rely on the fact that VPNs can be told what to route. If the VPN can be limited to routing only the IP space that both parties agree upon, this attack quickly falls down, or at minimum is only effective against the IP addresses that are allowed to be routed. All of these attacks require that the browser cache content and that this content persist beyond the initial request.

Additionally, most of these attacks could be thwarted by simply not using raw IP addresses, but rather fully qualified internal domain names, because this would require an attacker to have prior knowledge of the IP-to-DNS mapping. Also, the use of SSL/TLS on all internal devices would cause a mismatch error if the attacker attempted to cache the JavaScript over HTTPS. Removing all scripting and dynamic content from the browser is also an option, although a severely limiting one. Ultimately, most of these issues, aside from the ones found in Fig 3, could be mitigated by simply clearing the persistent cache regularly, or upon any change of routing information at the operating system level.

Conclusions: Relying entirely upon RFC1918 and built-in security functions within the browser to protect users is futile. The browser's same origin policy does not apply if the IP address is the same, and RFC1918 address space is by definition shared. VPNs should have an option to define static routes to mitigate arbitrarily changed routes on the client in a possibly adversarial situation. Ultimately RFC1918 is a poor alternative to IPv6.

Thanks: Special thanks to James Flom for technical oversight and editing help, and to HD Moore and Amit Klein for inspiration and helping me think through some of these ideas.

Tuesday, January 19, 2010

T1(T CARRIER)


In telecommunications, T-carrier, sometimes abbreviated as T-CXR, is the generic designator for any of several digitally multiplexed telecommunications carrier systems originally developed by Bell Labs and used in North America, Japan, and Korea.

The basic unit of the T-carrier system is the DS0, which has a transmission rate of 64 kbit/s, and is commonly used for one voice circuit.

The E-carrier system, where 'E' stands for European, is incompatible with the T-carrier (though cross compliant cards exist) and is used in most locations outside of North America, Japan, and Korea. It typically uses the E1 line rate and the E3 line rate. The E2 line rate is less commonly used. See the table below for bit rate comparisons.

The most common legacy of this system is the line rate speeds. "T1" now means any data circuit that runs at the original 1.544 Mbit/s line rate. Originally the T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals each encoded in 64 kbit/s streams, leaving 8 kbit/s of framing information which facilitates the synchronization and demultiplexing at the receiver. T2 and T3 circuit channels carry multiple T1 channels multiplexed, resulting in transmission rates of 6.312 and 44.736 Mbit/s, respectively.
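The 1.544 Mbit/s figure follows from the framing just described: 24 channels of 8-bit samples at 8000 frames per second, plus one framing bit per frame.

```python
# Reconstruct the T1 line rate from its framing structure.
channels = 24
bits_per_sample = 8
frames_per_second = 8000

payload = channels * bits_per_sample * frames_per_second  # 1,536,000 bit/s
framing = 1 * frames_per_second                           # 8,000 bit/s
t1_rate = payload + framing
print(t1_rate)  # 1544000 -> 1.544 Mbit/s
```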

E1(E CARRIER)

In digital telecommunications, where a single physical wire pair can be used to carry many simultaneous voice conversations, worldwide standards have been created and deployed. The European Conference of Postal and Telecommunications Administrations (CEPT) originally standardized the E-carrier system, which revised and improved the earlier American T-carrier technology, and this has now been adopted by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T). This is now widely used in almost all countries outside the USA, Canada and Japan.

The E-carrier standards form part of the Plesiochronous Digital Hierarchy (PDH) where groups of E1 circuits may be bundled onto higher capacity E3 links between telephone exchanges or countries. This allows a network operator to provide a private end-to-end E1 circuit between customers in different countries that share single high capacity links in between.

In practice, only the E1 (30 circuit) and E3 (480 circuit) versions are used. Physically, E1 is transmitted as 32 timeslots and E3 as 512 timeslots, but one timeslot is used for framing and typically one is allocated for signalling (call setup and tear-down). Unlike Internet data services, E-carrier systems permanently allocate capacity for a voice call for its entire duration. This ensures high call quality because the transmission arrives with the same short delay (latency) and capacity at all times.
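The E1 figures above work out as follows, assuming 64 kbit/s per timeslot:

```python
# E1 capacity from the timeslot layout: 32 timeslots of 64 kbit/s, with
# one slot for framing and one (typically) for signalling.
timeslots = 32
line_rate = timeslots * 64_000   # 2,048,000 bit/s = 2.048 Mbit/s
voice_circuits = timeslots - 2   # minus framing and signalling slots
print(line_rate, voice_circuits) # 2048000 30
```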

E1 circuits are very common in most telephone exchanges and are used to connect to medium and large companies, to remote exchanges and in many cases between exchanges. E3 lines are used between exchanges, operators and/or countries, and have a transmission speed of 34.368 Mbit/s.

Sunday, January 17, 2010

Proxy server

Schematic representation of a proxy server, where the computer in the middle acts as the proxy server between the other two.

In computer networks, a proxy server is a server (a computer system or an application program) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server evaluates the request according to its filtering rules. For example, it may filter traffic by IP address or protocol. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it 'caches' responses from the remote server, and returns subsequent requests for the same content directly.
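The caching behaviour just described can be sketched with a toy in-memory model; `CachingProxy` and `fetch_from_origin` are illustrative names, not a real proxy implementation:

```python
# Toy model of a caching proxy: responses are keyed by URL, and the
# origin server is only contacted on a cache miss.
class CachingProxy:
    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # stand-in for an upstream request
        self.cache = {}

    def request(self, url: str) -> str:
        if url not in self.cache:                   # miss: go upstream
            self.cache[url] = self.fetch_from_origin(url)
        return self.cache[url]                      # hit: serve locally

# Demo with a fake origin that counts how often it is contacted.
calls = []
proxy = CachingProxy(lambda url: calls.append(url) or f"<body of {url}>")
proxy.request("http://example.com/")
proxy.request("http://example.com/")  # served from cache
print(len(calls))  # 1: the origin was only contacted once
```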

A proxy server has many potential purposes, including:

  • To keep machines behind it anonymous (mainly for security).[1]
  • To speed up access to resources (using caching). Web proxies are commonly used to cache web pages from a web server.[2]
  • To apply access policy to network services or content, e.g. to block undesired sites.
  • To log / audit usage, i.e. to provide company employee Internet usage reporting.
  • To bypass security or parental controls.
  • To scan transmitted content for malware before delivery.
  • To scan outbound content, e.g., for data leak protection.
  • To circumvent regional restrictions.

A proxy server that passes requests and replies unmodified is usually called a gateway or sometimes tunneling proxy.

A proxy server can be placed in the user's local computer or at various points between the user and the destination servers on the Internet.

A reverse proxy is (usually) an Internet-facing proxy used as a front-end to control and protect access to a server on a private network, commonly also performing tasks such as load-balancing, authentication, decryption or caching.

Saturday, January 16, 2010

Digital Signal 3

A Digital Signal 3 (DS3) is a digital signal level 3 T-carrier. It may also be referred to as a T3 line.

  • The data rate for this type of signal is 44.736 Mbit/s.
  • This level of carrier can transport 28 DS1 level signals within its payload.
  • This level of carrier can transport 672 DS0 level channels within its payload.

Cabling

DS3 interconnect cables must be made with true 75 ohm cable and connectors. Cables or connectors which are 50 ohm, or which significantly deviate from 75 ohms, will result in reflections which lower the performance of the connection, possibly to the point of it not working. Bellcore standard GR-139-CORE defines type 734 and 735 cables for this application. Due to losses, there are differing distance limitations for each type of cable; 734 has a larger center conductor and insulator, giving lower losses for a given distance. The BNC connectors are also very important, as are the crimping and cable stripping tools used to install them. Trompeter, Cannon, Amphenol, Kings, and Canare make some of the true 75 ohm connectors known to work. RG-6 or even inexpensive RG-59 cable will work in a pinch when properly connectorized, though it does not meet telephony technical standards.

Usage

This level of transport or circuit is mostly used between telephony carriers, both wired and wireless.

Digital Signal 0 (DS0)

Digital Signal 0 (DS0) is a basic digital signalling rate of 64 kbit/s, corresponding to the capacity of one voice-frequency-equivalent channel.[1] The DS0 rate, and its equivalents E0 and J0, form the basis for the digital multiplex transmission hierarchy in telecommunications systems used in North America, Europe, Japan, and the rest of the world, for both the early plesiochronous systems such as T-carrier and for modern synchronous systems such as SDH/SONET.

The DS0 rate was introduced to carry a single digitized voice call. For a typical phone call, the audio sound is digitized at an 8 kHz sample rate using 8-bit pulse-code modulation for each of the 8000 samples per second. This resulted in a data rate of 64 kbit/s.

Because of its fundamental role in carrying a single phone call, the DS0 rate forms the basis for the digital multiplex transmission hierarchy in telecommunications systems used in North America. To limit the number of wires required between two points involved in exchanging voice calls, a system was built in which multiple DS0s are multiplexed together on higher capacity circuits. In this system, twenty-four (24) DS0s are multiplexed into a DS1 signal. Twenty-eight (28) DS1s are multiplexed into a DS3. When carried over copper wire, this is the well-known T-carrier system, with T1 and T3 corresponding to DS1 and DS3, respectively.
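The hierarchy above reduces to simple arithmetic:

```python
# DS0 rate from PCM sampling, and the DS1/DS3 multiplexing hierarchy.
ds0 = 8000 * 8            # 8 kHz sampling at 8 bits = 64,000 bit/s
ds1_channels = 24         # DS0s per DS1
ds3_ds1s = 28             # DS1s per DS3
print(ds0)                      # 64000
print(ds1_channels * ds3_ds1s)  # 672 DS0s in a DS3
```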

Besides its use for voice communications, the DS0 rate may support twenty 2.4 kbit/s channels, ten 4.8 kbit/s channels, five 9.6 kbit/s channels, one 56 kbit/s channel, or one 64 kbit/s clear channel.

E0 (standardized as ITU G.703) is the European equivalent of the North American DS0 for carrying a single voice call. However, there are some subtle differences in implementation. Voice signals are encoded for carriage over E0 according to ITU G.711. Note that when a T-carrier system is used as in North America, robbed bit signaling can mean that a DS0 channel carried over that system is not an error-free bit-stream. The out-of-band signaling used in the European E-carrier system avoids this.

software transactional memory (STM)

Software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. It functions as an alternative to lock-based synchronization. A transaction in this context is a piece of code that executes a series of reads and writes to shared memory. These reads and writes logically occur at a single instant in time; intermediate states are not visible to other (successful) transactions. The idea of providing hardware support for transactions originated in a 1986 paper and patent by Tom Knight[1]. The idea was popularized by Maurice Herlihy and J. Eliot B. Moss[2]. In 1995 Nir Shavit and Dan Touitou extended this idea to software-only transactional memory (STM)[3]. STM has recently been the focus of intense research, and support for practical implementations is growing.

Performance

Unlike the locking techniques used in most modern multithreaded applications, STM is very optimistic: a thread completes modifications to shared memory without regard for what other threads might be doing, recording every read and write that it is performing in a log. Instead of placing the onus on the writer to make sure it does not adversely affect other operations in progress, it is placed on the reader, who after completing an entire transaction verifies that other threads have not concurrently made changes to memory that it accessed in the past. This final operation, in which the changes of a transaction are validated and, if validation is successful, made permanent, is called a commit. A transaction may also abort at any time, causing all of its prior changes to be rolled back or undone. If a transaction cannot be committed due to conflicting changes, it is typically aborted and re-executed from the beginning until it succeeds.
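The log-validate-commit cycle described above can be illustrated with a toy, deliberately simplified STM; the `TVar`/`atomically` names are borrowed from the Haskell STM vocabulary, and a single global lock stands in for a real system's fine-grained commit protocol:

```python
# Toy optimistic STM: reads and writes go to a per-transaction log, and
# commit validates that no read value changed before applying the writes.
import threading

class TVar:
    """A shared, versioned memory cell."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # makes validate-and-apply atomic

def atomically(tx_fn):
    while True:                          # retry until commit succeeds
        reads, writes = {}, {}

        def read(tvar):
            if tvar in writes:           # read our own pending write
                return writes[tvar]
            reads.setdefault(tvar, tvar.version)  # record version seen
            return tvar.value

        def write(tvar, value):
            writes[tvar] = value         # buffered, not yet visible

        result = tx_fn(read, write)
        with _commit_lock:
            if all(tvar.version == v for tvar, v in reads.items()):
                for tvar, value in writes.items():  # commit the log
                    tvar.value = value
                    tvar.version += 1
                return result
        # validation failed: another commit intervened, so re-execute

# Transfer between two accounts as a transaction.
a, b = TVar(100), TVar(0)

def transfer(read, write, amount=30):
    write(a, read(a) - amount)
    write(b, read(b) + amount)

atomically(transfer)
print(a.value, b.value)  # 70 30
```

The abort-and-retry loop is exactly the re-execution described above: if another transaction commits to a `TVar` this transaction read, the version check fails and the whole transaction function runs again.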

The benefit of this optimistic approach is increased concurrency: no thread needs to wait for access to a resource, and different threads can safely and simultaneously modify disjoint parts of a data structure that would normally be protected under the same lock. Despite the overhead of retrying transactions that fail, in most realistic programs, conflicts arise rarely enough that there is an immense performance gain[citation needed] over lock-based protocols on large numbers of processors.

However, in practice STM systems also suffer a performance hit relative to fine-grained lock-based systems on small numbers of processors (1 to 4 depending on the application). This is due primarily to the overhead associated with maintaining the log and the time spent committing transactions. Even in this case performance is typically no worse than twice as slow.[4] Advocates of STM believe this penalty is justified by the conceptual benefits of STM.

Theoretically, in the worst case, when n concurrent transactions are running at the same time, O(n) memory and processor time may be required. Actual needs depend on implementation details (one can make transactions fail early enough to avoid overhead), and there are also cases (although rare) where lock-based algorithms have better theoretical computing time than software transactional memory.

Conceptual advantages and disadvantages

In addition to its performance benefits, STM greatly simplifies conceptual understanding of multithreaded programs and helps make programs more maintainable by working in harmony with existing high-level abstractions such as objects and modules. Lock-based programming has a number of well-known problems that frequently arise in practice:

  • They require thinking about overlapping operations and partial operations in distantly separated and seemingly unrelated sections of code, a task which is very difficult and error-prone for programmers.
  • They require programmers to adopt a locking policy to prevent deadlock, livelock, and other failures to make progress. Such policies are often informally enforced and fallible, and when these issues arise they are insidiously difficult to reproduce and debug.
  • They can lead to priority inversion, a phenomenon where a high-priority thread is forced to wait on a low-priority thread holding exclusive access to a resource that it needs.

In contrast, the concept of a memory transaction is much simpler, because each transaction can be viewed in isolation as a single-threaded computation. Deadlock and livelock are either prevented entirely or handled by an external transaction manager; the programmer need hardly worry about it. Priority inversion can still be an issue, but high-priority transactions can abort conflicting lower priority transactions that have not already committed.

On the other hand, the need to abort failed transactions also places limitations on the behavior of transactions: they cannot perform any operation that cannot be undone, including most I/O. Such limitations are typically overcome in practice by creating buffers that queue up the irreversible operations and perform them at a later time outside of any transaction. In Haskell, this limitation is enforced at compile time by the type system.