Sunday, October 26, 2008

CHEATING IN ONLINE GAMES

Cheating in online games is any activity that modifies the game experience to give one player an advantage over others. Depending on the game, different activities constitute cheating, and it is often a matter of consensus as to which particular activities do. Clive Thompson writes[1] that "Johann Huizinga, one of the first big philosophers of ludology -- the study of play -- defined cheating as when you pretend to obey the rules of the game but secretly subvert them to gain advantage over another player."

Cheating reportedly exists in all multiplayer online games but is difficult to prove[2]. The Internet provides players with the opportunity, means and methodology -- through anonymity and resources -- necessary to cheat in online games; in addition, darknets provide access to cheat tools and methods.

Types of cheating

Lagging

By attaching a physical device (called a lag switch) to a standard Ethernet cable, a player is able to disrupt updates/communication from the server with the intent of tricking the game server into continuing to accept client-side updates (which remain unimpeded). This can have a variety of effects such as teleporting, delayed animation or fast-forwarded game play; however, the goal is to gain advantage over another player without reciprocation.

In the peer-to-peer gaming model, lagging refers to a player with a faster connection flooding opponents with a basic denial-of-service attack outside the game structure.

User settings

Typically, a player can change settings within a game to suit their preferences, play-style and/or system; these alterations are considered cheating in certain circumstances. For example, changing the keyboard layout to make it easier to use is an accepted practice and not considered cheating; however, changing player models and/or textures, increasing the field-of-view, turning off or limiting particle effects, and modifying the brightness and/or gamma are considered cheating when set to extremes.


Exploits

Exploiting is the application of an unintended use or bug that gives the player an advantage. Not all gamers view exploits as cheating; some view them as another skill, because certain exploits take a significant amount of time to find and/or dexterity and timing to use. Example dexterity/timing exploits include bunny hopping and texture-climbing in Quake. Even an official part of the series such as "skiing" in Tribes is considered an exploit by some. However, exploits are considered cheating when they have an unbalancing effect, are used in an unintended manner, or were never intended to be features.

Ghosting

Most games allow other participants to observe the game as it is played from a variety of perspectives; depending on the game, perspectives allow an observer a map overview or attach a "camera" to the movement of a specific player. In doing so, the observer can communicate with an accomplice using a secondary communication methodology (in-game private message, 3rd-party or even off-line) to inform friendly players of traps or the position of opponents; an observer can be an active player, using a separate computer, connection and account.

Some systems prevent inactive players from observing the game if they are on the same IP address as an active player on the grounds that they are probably in close physical proximity; when all players from a single IP address are no longer active participants, they are all allowed to observe.

Binding

Binding involves reassigning a command to the mouse wheel, another key (such as CAPS LOCK), or a combination of keys, allowing a player to issue commands at a faster rate than the expected physical limitation of pressing the default key sequentially. For example, assigning the "fire" command to the mouse wheel allows a player to shoot faster (generally with weapons that fire at the same rate at which the user clicks) when compared to the default "fire" key configuration. This is a subset of the user setting cheat.

Aimbotting and Triggerbotting

An aimbot (sometimes called "auto-aim") is a type of computer game bot used in multiplayer first-person shooter games to provide varying levels of target acquisition assistance to the player. While most common in first-person shooters, they exist in other game types and are often used in combination with a triggerbot, which shoots automatically when an opponent appears within the player's field-of-view. Some triggerbots are blatant, while others attempt to hide the fact that they are being used through a number of methods.

Wallhacking

Wallhacking allows the player to see through solid or opaque objects and/or manipulate or remove textures. When used in conjunction with an aimbot, certain wallhacks allow the player to shoot through solid objects. A subset of the wallhack known as WhiteWalls removes the color/texture from objects in the surrounding environment, providing distinct contrast to the opposition's character models, which remain colored/textured. (See ESP for the evolution of the wallhack.)

Cham hacks, chameleon skins, or chams for short, generally replace player models with brightly colored skins, often in neon red/yellow and blue/green. Chams are wallhacked skins that change color depending on whether the model is visible. For instance, if an enemy's head is showing, it will be a different color from the rest of the model. This gives the user a very large advantage over non-cheating players, especially in games where camouflage on player models is negated by the extremely bright colors, or where the cheat user reaches walls that he or she may shoot through.

ESP

Extrasensory perception (ESP) in video games displays contextual information such as the health, name, equipment, position and/or orientation of other participants as navigation/directional markers. In military parlance, this is known as Battlefield Visualization and part of a larger trend toward Information Dominance.

Sharing

Sharing is when multiple people play using a single character -- mainly in MMORPGs -- to gain an advantage through higher online time and/or the ability to apply more manpower toward game activities such as leveling or gaining experience. In some MMOs this is not seen as cheating, although others such as MapleStory, Blizzard Entertainment's World of Warcraft and Jagex's RuneScape specifically forbid it.

Spinbots

Spinbots alter the game so that play occurs on a rotated screen -- upside down, sideways, diagonal, etc. Spinbots that make play harder for their user are rare; spinbots that present the user a normal view are more common, but may still cause the player's in-game model to spin extremely fast, disrupting the character model's hitbox and distracting other players.

Disconnecting

In games where wins and losses are recorded on a player's account, a player may disconnect upon losing in order to prevent the loss from being recorded. A related practice is a server operator booting opponents or players they do not favor. Disconnecting is considered immoral, as the opponent may not have their "win" recorded. Some games implement a disconnection penalty, usually by recording the disconnect as a loss, or by a loss of experience points as in Halo 3.

Stacking

Stacking involves altering game settings or team lineups to give one or more teams an unfair advantage over the others. One example is pitting a team of skilled or well-known players against a team of lesser skill. Although a valid and accepted practice, especially in real-life sports[3], stacking upsets less-skilled players who feel that they aren't being given a fair chance. Less ethical rigging involves weighting the game by outfitting a player or team with better (or more familiar) weapons or equipment, or by creating a play field that caters to a certain player, team and/or playing style.

Farming

In games where achievements are awarded for defeating a number of opponents of a particular class, players may arrange to win or lose against one another in order to obtain the achievements without having to play the game linearly. This is also known as stat-padding or swapping, and is not considered cheating by most.


Monday, October 13, 2008

PFConfig v1.0.220

Total Setup of your router's Port Forwards!

Automatically forward your ports with PFConfig. If you are trying to run an application that requires ports to be forwarded, you can skip the work and just use PFConfig. With PFConfig, you don't have to learn how to set up your router. It's so easy to use: you just pick the program from a list and say 'Update Router'.

Features:


* Configures your router's port forwarding section for you.

* Increases security by allowing you to forward ports when you need them, and stop forwarding them when you do not.

* Optimizes the use of your router's available memory.

* Eliminates the hassle of logging in to your router every time you want to make a change to your port forwarding setup.


Automatically Forwards Ports
When you want to run an application that requires a port to be forwarded, all you have to do in PFConfig is select that application from the list and choose "Forward this App". If you want to forward different apps to different computers, you simply enter the other computer's address and PFConfig takes care of the rest.
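For the curious, port forwarding can also be automated programmatically when the router supports UPnP. The sketch below uses the miniupnpc Python binding; this is an assumption on my part for illustration, not PFConfig's own mechanism (PFConfig drives the router's configuration pages directly). The port number and description are hypothetical.

import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200          # milliseconds to wait for gateway replies
upnp.discover()                   # broadcast for UPnP devices on the LAN
upnp.selectigd()                  # pick the Internet Gateway Device

port = 28960                      # hypothetical application port
upnp.addportmapping(port, 'TCP', upnp.lanaddr, port,
                    'my application', '')

# ... run the application, then close the hole again:
upnp.deleteportmapping(port, 'TCP')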

Optimized Use of Ports
When you use PFConfig, our advanced optimization routines will combine multiple applications into fewer entries in your router. You can frequently forward more applications than your router would otherwise support.
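The vendor does not document these routines, but a plausible reading is that they merge overlapping or adjacent port ranges into single router entries. A toy sketch of that idea in Python, with made-up port ranges:

def merge_ranges(ranges):
    # Collapse overlapping or adjacent (start, end) port ranges.
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Three applications needing three entries collapse into two router rules:
print(merge_ranges([(27015, 27020), (27018, 27030), (28960, 28960)]))
# [(27015, 27030), (28960, 28960)]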

Huge Increases in Security
When you use PFConfig, it is incredibly easy to forward ports as you need them, and turn them off when you do not. Turning off your forwarded ports when not in use is a huge increase in security.

Never Login to your Router Again
With PFConfig, you do not have to log in to your router to add and remove forwarded ports. Simply tell PFConfig which ports to forward and where to forward them, and the rest is taken care of for you. You can pretty much forget your router's password.

The Tour :
http://portforward.com/store/pfconfigtour.htm

What it doesn't do


* Please note that PFConfig does not configure your firewall; it only configures your port forwarding. You may still have to configure your firewall to get some applications to work.

* PFConfig only works with supported routers. We are adding routers to this list every day, so if your router is not supported yet, please check back soon.

* More info here:
https://portforward.com/store/pfconfig.cgi

From : www.astatalk.com

Squid

What is Squid?

Squid is a fully-featured HTTP/1.0 proxy which is almost (but not quite - we're getting there!) HTTP/1.1 compliant. Squid offers a rich access control, authorization and logging environment to develop web proxy and content serving applications.

Where did Squid come from?

Squid is based on the Harvest Cache Daemon developed in the early 1990s. It was one of two forks from the codebase after the Harvest project ran to completion. (The other fork became NetApp's NetCache.)

The Squid project was funded by an NSF grant (NCR-9796082) which covered research into caching technologies. The ircache funding ran out a few years later and the Squid project continued through volunteer donations and the occasional commercial investment.

Squid is currently being developed by a handful of individuals donating their time and effort to building current and next generation content caching and delivery technologies. An ever-growing number of companies use Squid to save on their internet web traffic, improve performance, deliver faster browsing to their end-clients and provide static, dynamic and streaming content to millions of internet users worldwide.

Who uses Squid today?

A good question! Many of you are using Squid without even knowing it! Some companies have embedded Squid in their home or office firewall devices, others use Squid in large-scale web proxy installations to speed up broadband and dialup internet access. Squid is being increasingly used in content delivery architectures to deliver static and streaming video/audio to internet users worldwide.

Why should I deploy Squid?

(Or.. "Why should I bother with web caching? Can't I just buy more bandwidth?")

The developers of the HTTP protocol identified early on that there was going to be exponential growth in content and, concerned with distribution mechanisms, added powerful caching primitives.

These primitives allow content developers and distributors to hint to servers and end-user applications how content should be validated, revalidated and cached. This had the effect of dramatically reducing the amount of bandwidth required to serve content and improved user response times.
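To make the validation primitive concrete, here is a minimal sketch of a conditional request in Python, assuming the third-party requests library is installed; the URL is a placeholder. The second request revalidates the cached copy with If-None-Match; a 304 reply means nothing is re-transferred except headers.

import requests

url = "http://example.com/"                    # placeholder URL
first = requests.get(url)
etag = first.headers.get("ETag")               # validator, if the server sent one

headers = {"If-None-Match": etag} if etag else {}
second = requests.get(url, headers=headers)

if second.status_code == 304:
    print("Not Modified: the cached body is still valid")
else:
    print("Fresh body received:", len(second.content), "bytes")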

Squid is one of the projects which grew out of the initial content distribution and caching work in the mid-90s. It has grown to include extra features such as powerful access control, authorization, logging, content distribution/replication, traffic management and shaping and more. It has many, many work-arounds, new and old, to deal with incomplete and incorrect HTTP implementations.

For ISPs: Save on bandwidth, improve user experience

Squid allows Internet Providers to save on their bandwidth through content caching. Cached content means data is served locally and users will see this through faster download speeds with frequently-used content.

A well-tuned proxy server (even without caching!) can improve user speeds purely by optimising TCP flows. It's easy to tune servers to deal with the wide variety of latencies found on the internet - something that desktop environments just aren't tuned for.

Squid allows ISPs to avoid spending large amounts of money on upgrading core equipment and transit links to cope with ever-growing demand for content. It also allows ISPs to prioritise and control certain web content types where dictated by technical or economic reasons.

For Websites: Scale your application without massive investment in hardware and development time

Squid is one of the oldest content accelerators, used by thousands of websites around the world to ease the load on their servers. Frequently-seen content is cached by Squid and served to the end-client with only a fraction of the application server load needed normally. Setting up an accelerator in front of an existing website is almost always a quick and simple task with immediate benefits.

For Content Delivery Providers: distribute your content worldwide

Squid makes it easy for content distributors and streaming media developers to distribute content worldwide. CDN providers can buy cheap PC hardware running Squid and deploy in strategic locations around the internet to serve enormous amounts of data cheaply and efficiently.

A large number of companies have deployed servers running Squid in the past in exactly this manner.

From : http://www.squid-cache.org/

How to identify SQL Server performance issues, by analyzing Profiler output?

It is always better to be proactive than reactive when it comes to identifying and eliminating SQL Server performance issues. In this article, I am going to explain the method I follow to identify performance issues in my database applications before those applications go into the production environment.
Once the application is completely built and tested, I conduct something called a "load test" or "stress test". There are several ways (or tools) with which one can stress test a database application. I use Mercury LoadRunner, though it is a very expensive tool to work with. Cheaper alternatives include tools like "Database Hammer" from the SQL Server 2000 Resource Kit. The idea behind stress testing is to simulate real production loads and user connections and see how the application and database behave under extreme stress. This enables us to fix potential performance, blocking and deadlocking issues before we put the application into production. So, it is very important to have a separate load testing environment that mirrors your production environment in terms of hardware and software resources (processing power, available memory, disk layouts and RAID arrays, versions and editions of software), network bandwidth, etc. Without this, your load test may not be a valid one.
In this article, I will explain how to use Profiler to identify SQL Server specific performance issues. At the end of this article, you'll find books related to SQL Server performance tuning, and links to load testing tools.
The idea here is to run Profiler during the load test, and analyze the Profiler output at the end of it, to identify the longest running queries, the most CPU intensive queries, stored procedure recompilations, errors like deadlocks, etc. For this purpose, I use a specific trace template that captures all the data we need for this analysis. (This trace template is only for use with SQL Server 2000 and is not compatible with SQL Server 7.0.)


Click here to download the trace definition file LoadTest.tdf

This trace template captures the following event classes:

Database: Database File Auto Grow, Log File Auto Grow
Errors and Warnings: ErrorLog, EventLog, Exception
Locks: Lock:Deadlock, Lock:Deadlock Chain
Stored Procedures: RPC:Completed, RPC:Starting, SP:CacheHit, SP:CacheInsert, SP:CacheMiss, SP:CacheRemove, SP:Completed, SP:ExecContextHit, SP:Recompile, SP:Starting, SP:StmtCompleted
T-SQL: SQL:BatchCompleted, SQL:BatchStarting, SQL:StmtCompleted


This trace template also captures the following data columns:

EventClass
ApplicationName
CPU
SPID
DatabaseName
Duration
EndTime
Error
EventSubClass
HostName
IntegerData
LoginName
NTUserName
ObjectName
StartTime
TextData

Download the trace template LoadTest.tdf from the above link and save it to a location on your PC, say C:\LoadTest. Follow these steps to create and run a new trace based on this definition file.
  1. Open Profiler.
  2. From the File menu, select New > Trace...
  3. In the 'Connect to SQL Server' dialog box, connect to the SQL Server that you are going to be tracing by providing the server name and login information (make sure you connect as a sysadmin).
  4. In the General tab of the 'Trace Properties' dialog box, click the folder icon next to the 'Template file name:' text box. Select the trace template file that you've just downloaded.
  5. Now we need to save the Profiler output to a table for later analysis, so check the 'Save to table:' check box. In the 'Connect to SQL Server' dialog box, specify the SQL Server (and login and password) on which you'd like to store the Profiler output. In the 'Destination Table' dialog box, select a database and table name. Click OK. It's a good idea to save the Profiler output to a different SQL Server than the one on which we are conducting the load test.
  6. Check 'Enable trace stop time:' and select a date and time at which you want the trace to stop itself. (I typically conduct the load test for about 2 hours).
  7. Click "Run" to start the trace.

It's also a good idea to run Profiler on a client machine instead of on the SQL Server itself. On the client machine, make sure you have enough space on the system drive.

As soon as Profiler starts tracing, kick off your load test as well. At the end of the load test, you'll see that Profiler has logged massive amounts of information into the table you selected in Step 5. Now we need to analyze this information and find out: what are the long running queries and stored procedures, what are the most CPU intensive stored procedures, which stored procedures are being recompiled too many times, and are there any errors being generated under stress.

I wrote some stored procedures that can be used to analyze the Profiler output and generate various reports based on your requirements. I'll list some of those stored procedures here and provide you the code. Once you get the idea, you should be able to write your own stored procedures to get the information you need. You need to create these stored procedures in the same database as the Profiler output table. Note that in all these stored procedures I used dynamic SQL, so that we can reuse the stored procedures against multiple 'Profiler output' tables by passing the name of the table as an input parameter. Dynamic SQL also gives more flexibility when it comes to reporting, but it has its own disadvantages and should be used with caution in production code.
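The downloadable procedures themselves are listed below. As an illustration of the same kind of analysis, here is a hedged sketch in Python with pyodbc (not the author's code): it pulls the top ten longest-running statements straight from a profiler table named ProfilerData. The server and database names are placeholders; event IDs 41 (SQL:StmtCompleted) and 43 (SP:Completed) follow the SQL Server trace event numbering.

import pyodbc

# Placeholder connection string: point it at the server that holds the
# Profiler output table.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=ANALYSISBOX;"
                      "DATABASE=ProfilerDB;Trusted_Connection=yes")
cur = conn.cursor()

# Top 10 longest-running statements; Duration is reported in milliseconds.
cur.execute("""
    SELECT TOP 10 TextData, Duration, CPU, LoginName, StartTime
    FROM ProfilerData
    WHERE EventClass IN (41, 43)
    ORDER BY Duration DESC
""")
for row in cur.fetchall():
    print(row.Duration, row.CPU, row.LoginName, str(row.TextData)[:80])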

Note: Consider creating a clustered index on the EventClass column of the Profiler output table before you start executing the following procedures. This will improve the performance of the stored procedures mentioned below. For more information on EventClass, its values and descriptions, and for programmatically extending the scope of the following stored procedures, download the Events table from here!

Downloadable stored procedures and usage information:

LongRunningProcs: This procedure identifies the top n long running stored procedures and queries.

Following is the description of the input parameters:

@Table - Name of the 'Profiler output' table. Mandatory.

@Top - Number of long running queries that'll be returned. By default top 150 most long running queries will be returned. Optional.

@Owner - Name of the 'Profiler output' table's owner. Default is 'dbo'. Optional.

@LoginName - Returns only those queries, executed by this login name. Optional.

@HostName - Returns only those queries executed from this workstation. Optional.

@ApplicationName - Returns only those queries executed by this application. Optional.

@NTUserName - Returns only those queries executed by this Windows account. Optional.

@StartTime - Returns only those queries executed after this date and time. Optional.

@EndTime - Returns only those queries executed before this date and time. Optional.

@MinDuration - Returns only those queries whose duration (in Seconds) is greater than or equal to @MinDuration. Optional.

@MaxDuration - Returns only those queries whose duration (in Seconds) is less than or equal to @MaxDuration. Optional.

@Debug - If 1 is passed then the dynamic SQL statement is printed before execution. If 0, no debug information will be printed. 0 is the default. Optional.

Example:

--To see the top 10 long running procedures executed by the login 'BusinessObjects'
EXEC dbo.LongRunningProcs @Table = 'ProfilerData', @Top = 10, @LoginName = 'BusinessObjects'
GO

--To see the top 150 long running procedures executed between '2002/11/24 22:40' and '2002/11/24 22:47'

EXEC dbo.LongRunningProcs @Table = 'ProfilerData', @StartTime = '2002/11/24 22:40', @EndTime = '2002/11/24 22:47'
GO
CPUIntensive: This procedure identifies the top n most CPU intensive stored procedures and queries.

Following is the description of the input parameters:

@Table - Name of the 'Profiler output' table. Mandatory.

@Top - Number of most CPU intensive queries that'll be returned. By default top 150 most CPU intensive queries will be returned. Optional.

@Owner - Name of the 'Profiler output' table's owner. Default is 'dbo'. Optional.

@LoginName - Returns only those queries, executed by this login name. Optional.

@HostName - Returns only those queries executed from this workstation. Optional.

@ApplicationName - Returns only those queries executed by this application. Optional.

@NTUserName - Returns only those queries executed by this Windows account. Optional.

@StartTime - Returns only those queries executed after this date and time. Optional.

@EndTime - Returns only those queries executed before this date and time. Optional.

@MinDuration - Returns only those queries whose CPU utilization (in Seconds) is greater than or equal to @MinDuration. Optional.

@MaxDuration - Returns only those queries whose CPU utilization (in Seconds) is less than or equal to @MaxDuration. Optional.

@Debug - If 1 is passed then the dynamic SQL statement is printed before execution. If 0, no debug information will be printed. 0 is the default. Optional.

Example:

--To see the top 150 most CPU intensive queries executed by the application 'MTS Objects'
EXEC dbo.CPUIntensive @Table = 'ProfilerData', @ApplicationName = 'MTS Objects'
GO

--To see the top 150 CPU intensive queries executed since '2002/11/24 11:30'
EXEC dbo.CPUIntensive @Table = 'ProfilerData', @StartTime = '2002/11/24 11:30'
GO
ProcedureCacheUsage: This procedure displays the procedure cache usage information.

Following is the description of the input parameters:

@Table - Name of the 'Profiler output' table. Mandatory.

@Owner - Name of the 'Profiler output' table's owner. Default is 'dbo'. Optional.

@LoginName - Considers only those procedures, executed by this login name. Optional.

@HostName - Considers only those procedures, executed from this workstation. Optional.

@ApplicationName - Considers only those procedures, executed by this application. Optional.

@NTUserName - Considers only those procedures, executed by this Windows account. Optional.

@StartTime - Considers only those procedures, executed after this date and time. Optional.

@EndTime - Considers only those procedures, executed before this date and time. Optional.

@Debug - If 1 is passed then the dynamic SQL statement is printed before execution. If 0, no debug information will be printed. 0 is the default. Optional.

Example:

--To see the procedure cache usage statistics:
EXEC dbo.ProcedureCacheUsage 'ProfilerData'
GO
ShowErrors: This procedure displays the error messages raised during the stress test.

Following is the description of the input parameters:

@Table - Name of the 'Profiler output' table. Mandatory.

@Top - Number of error messages that'll be returned. By default top 150 error messages will be returned. Optional.

@Owner - Name of the 'Profiler output' table's owner. Default is 'dbo'. Optional.

@LoginName - Displays only those errors, generated by this login. Optional.

@HostName - Displays only those errors, generated by this workstation. Optional.

@ApplicationName - Displays only those errors, generated by this application. Optional.

@NTUserName - Displays only those errors, generated by this Windows account. Optional.

@StartTime - Displays only those errors, generated after this date and time. Optional.

@Debug - If 1 is passed then the dynamic SQL statement is printed before execution. If 0, no debug information will be printed. 0 is the default. Optional.

Example:

--To see all the errors generated by the login 'BusinessObjects'
EXEC dbo.ShowErrors @Table = 'ProfilerData', @LoginName = 'BusinessObjects'


Once you identify the problem queries and stored procedures using the above methods, you need to work on those individual procedures/queries to address the problems. Tools that will help in fine-tuning performance include the Index Tuning Wizard and the graphical execution plan option in Query Analyzer.

From : http://vyaskn.tripod.com/

Thursday, October 9, 2008

About PostgreSQL

About

PostgreSQL is a powerful, open source relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL92 and SQL99 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.

An enterprise class database, PostgreSQL boasts sophisticated features such as Multi-Version Concurrency Control (MVCC), point in time recovery, tablespaces, asynchronous replication, nested transactions (savepoints), online/hot backups, a sophisticated query planner/optimizer, and write ahead logging for fault tolerance. It supports international character sets, multibyte character encodings, Unicode, and it is locale-aware for sorting, case-sensitivity, and formatting. It is highly scalable both in the sheer quantity of data it can manage and in the number of concurrent users it can accommodate. There are active PostgreSQL systems in production environments that manage in excess of 4 terabytes of data. Some general PostgreSQL limits are included in the table below.

Maximum Database Size: Unlimited
Maximum Table Size: 32 TB
Maximum Row Size: 1.6 TB
Maximum Field Size: 1 GB
Maximum Rows per Table: Unlimited
Maximum Columns per Table: 250 - 1600, depending on column types
Maximum Indexes per Table: Unlimited

PostgreSQL has won praise from its users and industry recognition, including the Linux New Media Award for Best Database System, and is a five-time winner of The Linux Journal Editors' Choice Award for best DBMS.

Featureful and Standards Compliant

PostgreSQL prides itself in standards compliance. Its SQL implementation strongly conforms to the ANSI-SQL 92/99 standards. It has full support for subqueries (including subselects in the FROM clause), read-committed and serializable transaction isolation levels. And while PostgreSQL has a fully relational system catalog which itself supports multiple schemas per database, its catalog is also accessible through the Information Schema as defined in the SQL standard.

Data integrity features include (compound) primary keys, foreign keys with restricting and cascading updates/deletes, check constraints, unique constraints, and not null constraints.

It also has a host of extensions and advanced features. Among the conveniences are auto-increment columns through sequences, and LIMIT/OFFSET allowing the return of partial result sets. PostgreSQL supports compound, unique, partial, and functional indexes which can use any of its B-tree, R-tree, hash, or GiST storage methods.

GiST (Generalized Search Tree) indexing is an advanced system which brings together a wide array of different sorting and searching algorithms including B-tree, B+-tree, R-tree, partial sum trees, ranked B+-trees and many others. It also provides an interface which allows both the creation of custom data types as well as extensible query methods with which to search them. Thus, GiST offers the flexibility to specify what you store, how you store it, and the ability to define new ways to search through it --- ways that far exceed those offered by standard B-tree, R-tree and other generalized search algorithms.

GiST serves as a foundation for many public projects that use PostgreSQL such as OpenFTS and PostGIS. OpenFTS (Open Source Full Text Search engine) provides online indexing of data and relevance ranking for database searching. PostGIS is a project which adds support for geographic objects in PostgreSQL, allowing it to be used as a spatial database for geographic information systems (GIS), much like ESRI's SDE or Oracle's Spatial extension.

Other advanced features include table inheritance, a rules system, and database events. Table inheritance puts an object-oriented slant on table creation, allowing database designers to derive new tables from other tables, treating them as base classes. Even better, PostgreSQL supports both single and multiple inheritance in this manner.
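A minimal sketch of inheritance, following the example in the PostgreSQL documentation and issued here through the psycopg2 driver (the DSN and table names are placeholders):

import psycopg2

conn = psycopg2.connect("dbname=test")   # placeholder DSN
cur = conn.cursor()

cur.execute("CREATE TABLE cities (name text, population int, altitude int)")
cur.execute("CREATE TABLE capitals (state char(2)) INHERITS (cities)")

# A query against the base table also scans rows of the derived table:
cur.execute("SELECT name, altitude FROM cities WHERE altitude > 500")
print(cur.fetchall())
conn.commit()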

The rules system, also called the query rewrite system, allows the database designer to create rules which identify specific operations for a given table or view, and dynamically transform them into alternate operations when they are processed.

The events system is an interprocess communication system in which messages and events can be transmitted between clients using the LISTEN and NOTIFY commands, allowing both simple peer to peer communication and advanced coordination on database events. Since notifications can be issued from triggers and stored procedures, PostgreSQL clients can monitor database events such as table updates, inserts, or deletes as they happen.
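A minimal sketch of a listening client in Python with psycopg2 (assumed installed; the DSN and channel name are placeholders). Any other session, or a trigger, can fire the event by running NOTIFY table_updates;

import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=test")   # placeholder DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("LISTEN table_updates;")     # subscribe to a channel

while True:
    # Wait up to 5 seconds for the server socket to become readable.
    if select.select([conn], [], [], 5.0) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        note = conn.notifies.pop(0)
        print("event on channel", note.channel, "from backend pid", note.pid)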

Highly Customizable

PostgreSQL runs stored procedures in more than a dozen programming languages, including Java, Perl, Python, Ruby, Tcl, C/C++, and its own PL/pgSQL, which is similar to Oracle's PL/SQL. Included with its standard function library are hundreds of built-in functions that range from basic math and string operations to cryptography and Oracle compatibility. Triggers and stored procedures can be written in C and loaded into the database as a library, allowing great flexibility in extending its capabilities. Similarly, PostgreSQL includes a framework that allows developers to define and create their own custom data types along with supporting functions and operators that define their behavior. As a result, a host of advanced data types have been created that range from geometric and spatial primitives to network addresses to even ISBN/ISSN (International Standard Book Number/International Standard Serial Number) data types, all of which can be optionally added to the system.

Just as there are many procedural languages supported by PostgreSQL, there are also many library interfaces, allowing various languages, both compiled and interpreted, to interface with PostgreSQL. There are interfaces for Java (JDBC), ODBC, Perl, Python, Ruby, C, C++, PHP, Lisp, Scheme, and Qt, just to name a few.

Best of all, PostgreSQL's source code is available under the most liberal open source license: the BSD license. This license gives you the freedom to use, modify and distribute PostgreSQL in any form you like, open or closed source. Any modifications, enhancements, or changes you make are yours to do with as you please. As such, PostgreSQL is not only a powerful database system capable of running the enterprise, it is a development platform upon which to develop in-house, web, or commercial software products that require a capable RDBMS.

From : http://www.postgresql.org/

About Ping

Ping is a computer network tool used to test whether a particular host is reachable across an IP network; it is also used to self-test the network interface card of the computer, or as a speed test. It works by sending ICMP “echo request” packets to the target host and listening for ICMP “echo response” replies. Ping estimates the round-trip time, generally in milliseconds, records any packet loss, and prints a statistical summary when finished.
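Crafting raw ICMP packets needs administrator rights, so a simple way to script the same measurement is to wrap the system ping command and parse its summary line, as in this Python sketch (it assumes a Unix-like ping; the exact output format varies by platform):

import re
import subprocess

def ping_rtt(host, count=5):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    # e.g. "round-trip min/avg/max = 8/10/16 ms"
    m = re.search(r"min/avg/max[^=]*= *([\d.]+)/([\d.]+)/([\d.]+)", out)
    return tuple(float(x) for x in m.groups()) if m else None

print(ping_rtt("localhost"))   # (min, avg, max) in milliseconds, or None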

The word ping is also frequently used as a verb or noun, where it can refer directly to the round-trip time, the act of running a ping program or measuring the round-trip time.

From : http://en.wikipedia.org/


Ping is a basic Internet program that allows a user to verify that a particular IP address exists and can accept requests. The verb ping means the act of using the ping utility or command. Ping is used diagnostically to ensure that a host computer you are trying to reach is actually operating. If, for example, a user can't ping a host, then the user will be unable to use the File Transfer Protocol (FTP) to send files to that host. Ping can also be used with an operating host to see how long it takes to get a response back. Using ping, you can also learn the numeric form of the IP address behind a symbolic domain name.
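That name-to-address lookup is a single standard-library call in Python, for instance:

import socket

# Resolve the numeric IP address behind a symbolic domain name.
print(socket.gethostbyname("example.com"))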

Loosely, ping means "to get the attention of" or "to check for the presence of" another party online. Ping operates by sending a packet to a designated address and waiting for a response. The computer acronym (for Packet Internet or Inter-Network Groper) was contrived to match the submariners' term for the sound of a returned sonar pulse.

Ping can also refer to the process of sending a message to all the members of a mailing list requesting an ACK (acknowledgement code). This is done before sending e-mail in order to confirm that all of the addresses are reachable.

From : http://searchnetworking.techtarget.com/

The Story of the PING Program

Yes, it's true! I'm the author of ping for UNIX. Ping is a little thousand-line hack that I wrote in an evening which practically everyone seems to know about. :-)

I named it after the sound that a sonar makes, inspired by the whole principle of echo-location. In college I'd done a lot of modeling of sonar and radar systems, so the "Cyberspace" analogy seemed very apt. It's exactly the same paradigm applied to a new problem domain: ping uses timed IP/ICMP ECHO_REQUEST and ECHO_REPLY packets to probe the "distance" to the target machine.

My original impetus for writing PING for 4.2a BSD UNIX came from an offhand remark in July 1983 by Dr. Dave Mills while we were attending a DARPA meeting in Norway, in which he described some work that he had done on his "Fuzzball" LSI-11 systems to measure path latency using timed ICMP Echo packets.

In December of 1983 I encountered some odd behavior of the IP network at BRL. Recalling Dr. Mills' comments, I quickly coded up the PING program, which revolved around opening an ICMP style SOCK_RAW AF_INET Berkeley-style socket(). The code compiled just fine, but it didn't work -- there was no kernel support for raw ICMP sockets! Incensed, I coded up the kernel support and had everything working well before sunrise. Not surprisingly, Chuck Kennedy (aka "Kermit") had found and fixed the network hardware before I was able to launch my very first "ping" packet. But I've used it a few times since then. *grin* If I'd known then that it would be my most famous accomplishment in life, I might have worked on it another day or two and added some more options.

The folks at Berkeley eagerly took back my kernel modifications and the PING source code, and it's been a standard part of Berkeley UNIX ever since. Since it's free, it has been ported to many systems since then, including Microsoft Windows95 and WindowsNT. You can identify it by the distinctive messages that it prints, which look like this:

PING vapor.arl.army.mil (128.63.240.80): 56 data bytes
64 bytes from 128.63.240.80: icmp_seq=0 time=16 ms
64 bytes from 128.63.240.80: icmp_seq=1 time=9 ms
64 bytes from 128.63.240.80: icmp_seq=2 time=9 ms
64 bytes from 128.63.240.80: icmp_seq=3 time=8 ms
64 bytes from 128.63.240.80: icmp_seq=4 time=8 ms
^C
----vapor.arl.army.mil PING Statistics----
5 packets transmitted, 5 packets received, 0% packet loss
round-trip (ms) min/avg/max = 8/10/16

In 1993, ten years after I wrote PING, the USENIX association presented me with a handsome scroll, pronouncing me a Joint recipient of The USENIX Association 1993 Lifetime Achievement Award presented to the Computer Systems Research Group, University of California at Berkeley 1979-1993. ``Presented to honor profound intellectual achievement and unparalleled service to our Community. At the behest of CSRG principals we hereby recognize the following individuals and organizations as CSRG participants, contributors and supporters.'' Wow!

Want to see the source code? (40k)

From my point of view PING is not an acronym standing for Packet InterNet Grouper, it's a sonar analogy. However, I've heard second-hand that Dave Mills offered this expansion of the name, so perhaps we're both right. Sheesh, and I thought the government was bad about expanding acronyms! :-)

Phil Dykstra added ICMP Record Route support to PING, but in those early days few routers processed them, making this feature almost useless. The limitation on the number of hops that could be recorded in the IP header precluded this from measuring very long paths.

I was insanely jealous when Van Jacobson of LBL used my kernel ICMP support to write TRACEROUTE, realizing that he could elicit ICMP Time-to-Live Exceeded messages by modulating the IP time-to-live (TTL) field. I wish I had thought of that! :-) Of course, the real traceroute uses UDP datagrams because routers aren't supposed to generate ICMP error messages for ICMP messages.
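For the curious, the TTL trick is easy to sketch: send UDP probes with an increasing time-to-live and catch the ICMP Time Exceeded replies on a raw socket. This Python sketch is a bare-bones illustration, not a replacement for the real traceroute; raw sockets require root.

import socket

def traceroute(dest, max_hops=30, port=33434):
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                             socket.getprotobyname("udp"))
        recv.settimeout(2)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send.sendto(b"", (dest_addr, port))   # probe an unlikely UDP port
        try:
            _, addr = recv.recvfrom(512)      # router's ICMP Time Exceeded
            print(ttl, addr[0])
            reached = addr[0] == dest_addr
        except socket.timeout:
            print(ttl, "*")
            reached = False
        finally:
            send.close()
            recv.close()
        if reached:
            break

traceroute("example.com")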

The best ping story I've ever heard was told to me at a USENIX conference, where a network administrator with an intermittent Ethernet had linked the ping program to his vocoder program, in essence writing:

ping goodhost | sed -e 's/.*/ping/' | vocoder
He wired the vocoder's output into his office stereo and turned up the volume as loud as he could stand. The computer sat there shouting "Ping, ping, ping..." once a second, and he wandered through the building wiggling Ethernet connectors until the sound stopped. And that's how he found the intermittent failure.

From : http://ftp.arl.mil/~mike/ping.html

MikroTik RouterOS Features

TCP/IP protocol management:
  • Firewall and NAT - stateful packet filtering; Peer-to-Peer protocol filtering; source and destination NAT; classification by source MAC, IP addresses, ports, protocols, protocol options, interfaces, internal marks, content, matching frequency
  • Routing - Static routing; Equal cost multi-path routing; Policy based routing (classification by source and destination addresses and/or by firewall mark); RIP v1 / v2, OSPF v2, BGP v4
  • Data Rate Management - per IP / protocol / subnet / port / firewall mark; HTB, PCQ, RED, SFQ, byte limited queue, packet limited queue; hierarchical limitation, CIR, MIR, contention ratios, dynamic client rate equalizing (PCQ)
  • HotSpot - HotSpot Gateway with RADIUS authentication/accounting; data rate limitation; traffic quota; real-time status information; walled-garden; customized HTML login pages; iPass support; SSL secure authentication
  • Point-to-Point tunneling protocols - PPTP, PPPoE and L2TP Access Concentrators and clients; PAP, CHAP, MSCHAPv1 and MSCHAPv2 authentication protocols; RADIUS authentication and accounting; MPPE encryption; compression for PPPoE; data rate limitation; PPPoE dial on demand
  • Simple tunnels - IPIP tunnels, EoIP (Ethernet over IP)
  • IPsec - IP security AH and ESP protocols; Diffie-Hellman groups 1, 2, 5; MD5 and SHA1 hashing algorithms; DES, 3DES, AES-128, AES-192, AES-256 encryption algorithms; Perfect Forward Secrecy (PFS) groups 1, 2, 5
  • Web proxy - FTP, HTTP and HTTPS caching proxy server; transparent HTTP caching proxy; SOCKS protocol support; support for caching on a separate drive; access control lists; caching lists; parent proxy support
  • Caching DNS client - name resolving for local use; Dynamic DNS Client; local DNS cache with static entries
  • DHCP - DHCP server per interface; DHCP relay; DHCP client; multiple DHCP networks; static and dynamic DHCP leases
  • Universal Client - Transparent address translation not depending on the client's setup
  • VRRP - VRRP protocol for high availability
  • UPnP - Universal Plug-and-Play support
  • NTP - Network Time Protocol server and client; synchronization with GPS system
  • Monitoring/Accounting - IP traffic accounting, firewall actions logging
  • SNMP - read-only access
  • M3P - MikroTik Packet Packer Protocol for Wireless links and Ethernet
  • MNDP - MikroTik Neighbor Discovery Protocol; also supports Cisco Discovery Protocol (CDP)
  • Tools - ping; traceroute; bandwidth test; ping flood; telnet; SSH; packet sniffer
Layer 2 connectivity
  • Wireless - IEEE802.11a/b/g wireless client and Access Point; Wireless Distribution System (WDS) support; virtual AP; 40 and 104 bit WEP; access control list; authentication on RADIUS server; roaming (for wireless client); Access Point bridging
  • Bridge - spanning tree protocol; multiple bridge interfaces; bridge firewalling
  • VLAN - IEEE802.1q Virtual LAN support on Ethernet and WLAN links; multiple VLANs; VLAN bridging
  • Synchronous - V.35, V.24, E1/T1, X.21, DS3 (T3) media types; sync-PPP, Cisco HDLC, Frame Relay line protocols; ANSI-617d (ANDI or annex D) and Q933a (CCITT or annex A) Frame Relay LMI types
  • Asynchronous - serial PPP dial-in / dial-out; PAP, CHAP, MSCHAPv1 and MSCHAPv2 authentication protocols; RADIUS authentication and accounting; onboard serial ports; modem pool with up to 128 ports; dial on demand
  • ISDN - ISDN dial-in / dial-out; PAP, CHAP, MSCHAPv1 and MSCHAPv2 authentication protocols; RADIUS authentication and accounting; 128K bundle support; Cisco HDLC, x75i, x75ui, x75bui line protocols; dial on demand
  • SDSL - Single-line DSL support; line termination and network termination modes
Hardware requirements
  • CPU and motherboard - advanced 4th generation (core frequency 100MHz or more), 5th generation (Intel Pentium, Cyrix 6X86, AMD K5 or comparable) or newer uniprocessor Intel IA-32 (i386) compatible (multiple processors are not supported)
  • RAM - minimum 48 MB, maximum 1 GB; 64 MB or more recommended
  • Hard Drive/Flash - standard ATA interface controller and drive (SCSI and USB controllers and drives are not supported; RAID controllers that require additional drivers are not supported) with minimum of 64 MB space
Hardware needed for installation time only

Depending on installation method chosen the router must have the following hardware:
  • Floppy-based installation - standard AT floppy controller and 3.5'' disk drive connected as the first floppy disk drive (A); AT, PS/2 or USB keyboard; VGA-compatible video controller card and monitor
  • CD-based installation - standard ATA/ATAPI interface controller and CD drive supporting "El Torito" bootable CDs (you might need also to check if the router's BIOS supports booting from this type of media); AT, PS/2 or USB keyboard; VGA-compatible video controller card and monitor
  • Floppy-based network installation - standard AT floppy controller and 3.5'' disk drive connected as the first floppy disk drive (A); PCI Ethernet network interface card supported by MikroTik RouterOS (see the Device Driver List for the list)
  • Full network-based installation - PCI Ethernet network interface card supported by MikroTik RouterOS (see the Device Driver List for the list) with PXE or EtherBoot extension booting ROM (you might need also to check if the router's BIOS supports booting from network)
Configuration possibilities

RouterOS provides a powerful command-line configuration interface. You can also manage the router through WinBox - the easy-to-use remote configuration GUI for Windows - which provides all the benefits of the command-line interface without the actual "command line", which may scare novice users. Major features:
  • Clean and consistent user interface
  • Runtime configuration and monitoring
  • Multiple connections
  • User policies
  • Action history, undo/redo actions
  • Safe mode operation
  • Scripts can be scheduled for executing at certain times, periodically, or on events. All command-line commands are supported in scripts
When the router is not configured, there are only two ways to configure it:
  • Local terminal console - AT, PS/2 or USB keyboard and VGA-compatible video controller card with monitor
  • Serial console - first RS232 asynchronous serial port (usually the onboard port marked COM1), which is by default set to 9600bit/s, 8 data bits, 1 stop bit, no parity (see the sketch below)
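As a sketch of those settings, this is how a PC-side script could open that console with the third-party pyserial library (assumed installed; the device name depends on the operating system):

import serial

console = serial.Serial(
    port="/dev/ttyS0",             # "COM1" on Windows
    baudrate=9600,
    bytesize=serial.EIGHTBITS,     # 8 data bits
    parity=serial.PARITY_NONE,     # no parity
    stopbits=serial.STOPBITS_ONE,  # 1 stop bit
    timeout=1,
)
console.write(b"\r\n")             # nudge the console to print a prompt
print(console.read(256).decode(errors="replace"))
console.close()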
After the router is configured, it may be managed through the following interfaces:
  • Local terminal console - AT, PS/2 or USB keyboard and VGA-compatible video controller card with monitor
  • Serial console - any RS232 asynchronous serial port (you may choose any one; the first, also known as COM1, is used by default), which is by default set to 9600bit/s, 8 data bits, 1 stop bit, no parity
  • Telnet - telnet server is running on TCP port 23 by default
  • SSH - SSH (secure shell) server is running on TCP port 22 by default (available only if the security package is installed); see the sketch after this list
  • MAC Telnet - MikroTik MAC Telnet protocol server is by default enabled on all Ethernet-like interfaces
  • Winbox - Winbox is a RouterOS remote administration GUI for Windows that uses TCP port 3986 (or 3987 if the security package is installed)
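As promised above, a minimal sketch of scripted management over SSH with the third-party paramiko library (assumed installed; the address, credentials, and command are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.88.1", port=22, username="admin", password="secret")

# Run one RouterOS command and print its output.
stdin, stdout, stderr = client.exec_command("/system resource print")
print(stdout.read().decode())
client.close()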
From : http://www.mikrotik.co.id/

Wednesday, October 8, 2008

REDHAT

What is Red Hat Network?

Red Hat® Network is the system management platform from Red Hat designed to provide complete life cycle management. It provides simple tools to efficiently manage systems on your network, including provisioning new systems, managing their updates and configuration changes, monitoring system performance, and eventually re-deploying the systems for a new purpose.

Why use Red Hat Network?

Red Hat Network is a tool that lowers the overall TCO (total cost of ownership) of Linux systems. With Red Hat Network, you are able to improve productivity and lower costs in a way that makes Linux deployable, scalable, manageable, and consistent:

Deployable
Provision new systems remotely according to profiles you create. In just a few minutes, you can deploy hundreds of fully configured Red Hat® Enterprise Linux® systems.
Scalable
Red Hat Network scales to manage 2 or 20,000 systems. Managing groups of systems is just as easy as managing a single system.
Manageable
Red Hat Network uses an intuitive web-based interface to perform management actions. Senior-level Linux administrators aren't required for routine management tasks, freeing up their time for higher value projects.
Consistent
Red Hat Network makes it easy to manage systems consistently, so you know you have the same patches, configurations, monitoring probes or applications on all the systems in a group. Its repeatable processes enable measurability, uniformity, and compliance. And Red Hat Network can be used as a single tool for managing both Red Hat Enterprise Linux and Solaris environments.


To download, just copy and then paste the URL.
CD 1/ 5

http://rapidshare.com/files/21391160/RHE5-CD1.part1.rar
http://rapidshare.com/files/21394086/RHE5-CD1.part2.rar
http://rapidshare.com/files/21396822/RHE5-CD1.part3.rar
http://rapidshare.com/files/21399728/RHE5-CD1.part4.rar
http://rapidshare.com/files/21402228/RHE5-CD1.part5.rar
http://rapidshare.com/files/21404588/RHE5-CD1.part6.rar
http://rapidshare.com/files/21406877/RHE5-CD1.part7.rar
http://rapidshare.com/files/21388211/RHE5-CD1.part8.rar

--------------------------------------------------------------------------------


CD 2/ 5

http://rapidshare.com/files/21435549/RHE5-CD2.part1.rar
http://rapidshare.com/files/21441051/RHE5-CD2.part2.rar
http://rapidshare.com/files/21602536/RHE5-CD2.part3.rar
http://rapidshare.com/files/21445771/RHE5-CD2.part4.rar
http://rapidshare.com/files/21448432/RHE5-CD2.part5.rar
http://rapidshare.com/files/21450963/RHE5-CD2.part6.rar
http://rapidshare.com/files/21453638/RHE5-CD2.part7.rar
http://rapidshare.com/files/21433357/RHE5-CD2.part8.rar

--------------------------------------------------------------------------------


CD 3/ 5

http://rapidshare.com/files/21457773/RHE5-CD3.part1.rar
http://rapidshare.com/files/21461033/RHE5-CD3.part2.rar
http://rapidshare.com/files/21464064/RHE5-CD3.part3.rar
http://rapidshare.com/files/21467083/RHE5-CD3.part4.rar
http://rapidshare.com/files/21470056/RHE5-CD3.part5.rar
http://rapidshare.com/files/21473075/RHE5-CD3.part6.rar
http://rapidshare.com/files/21476041/RHE5-CD3.part7.rar
http://rapidshare.com/files/21455207/RHE5-CD3.part8.rar

--------------------------------------------------------------------------------


CD 4/ 5

http://rapidshare.com/files/21480726/RHE5-CD4.part1.rar
http://rapidshare.com/files/21483928/RHE5-CD4.part2.rar
http://rapidshare.com/files/21486788/RHE5-CD4.part3.rar
http://rapidshare.com/files/21489477/RHE5-CD4.part4.rar
http://rapidshare.com/files/21492409/RHE5-CD4.part5.rar
http://rapidshare.com/files/21585215/RHE5-CD4.part6.rar
http://rapidshare.com/files/21587253/RHE5-CD4.part7.rar
http://rapidshare.com/files/21477669/RHE5-CD4.part8.rar

--------------------------------------------------------------------------------


CD 5/ 5

http://rapidshare.com/files/21590920/RHE5-CD5.part1.rar
http://rapidshare.com/files/21598049/RHE5-CD5.part2.rar
http://rapidshare.com/files/21588740/RHE5-CD5.part3.rar

Tuesday, October 7, 2008

JOOMLA.....

What is Joomla?

Joomla is an award-winning content management system (CMS), which enables you to build Web sites and powerful online applications. Many aspects, including its ease-of-use and extensibility, have made Joomla the most popular Web site software available. Best of all, Joomla is an open source solution that is freely available to everyone.

What's a content management system (CMS)?

A content management system is software that keeps track of every piece of content on your Web site, much like your local public library keeps track of books and stores them. Content can be simple text, photos, music, video, documents, or just about anything you can think of. A major advantage of using a CMS is that it requires almost no technical skill or knowledge to manage. Since the CMS manages all your content, you don't have to.

What are some real world examples of what Joomla! can do?

Joomla is used all over the world to power Web sites of all shapes and sizes. For example:

* Corporate Web sites or portals
* Corporate intranets and extranets
* Online magazines, newspapers, and publications
* E-commerce and online reservations
* Government applications
* Small business Web sites
* Non-profit and organizational Web sites
* Community-based portals
* School and church Web sites
* Personal or family homepages



Download Joomla here