
Sunday, May 27, 2012

Java: Why JavaBeans?

Ok dears, here is a short description of JavaBeans, as there was some confusion about using them in the comments on the post "JavaBean enhancements in Java 7".

The official definition of a bean, as given in the JavaBeans specification, is:
"A bean is a reusable software component based on Sun's JavaBeans specification that can be manipulated visually in a builder tool."
Once you implement a bean, others can use it in a builder environment (such as NetBeans). Instead of having to write tedious code, they can simply drop your bean into a GUI form and customize it with dialog boxes.

I'd like to address a common confusion before going any further: The JavaBeans that we discuss here have little in common with Enterprise JavaBeans (EJB). Enterprise JavaBeans are server-side components with support for transactions, persistence, replication, and security. At a very basic level, they too are components that can be manipulated in builder tools. However, the Enterprise JavaBeans technology is quite a bit more complex than the "Standard Edition" JavaBeans technology.

That does not mean that standard JavaBeans components are limited to client-side programming. Web technologies such as JavaServer Faces (JSF) and JavaServer Pages (JSP) rely heavily on the JavaBeans component model.

Why Beans?
Programmers with experience in Visual Basic will immediately know why beans are so important. Programmers coming from an environment in which the tradition is to "roll your own" for everything often find it hard to believe that Visual Basic is one of the most successful examples of reusable object technology. For those who have never worked with Visual Basic, here, in a nutshell, is how you build a Visual Basic application:
  1. You build the interface by dropping components (called controls in Visual Basic) onto a form window.

  2. Through property inspectors, you set properties of the components such as height, color, or other behavior.

  3. The property inspectors also list the events to which components can react. Some events can be hooked up through dialog boxes. For other events, you write short snippets of event handling code.

I do not want to imply that Visual Basic is a good solution for every problem. It is clearly optimized for a particular kind of problem—UI-intensive Windows programs. The JavaBeans technology was invented to make Java technology competitive in this arena. It enables vendors to create Visual Basic-style development environments. These environments make it possible to build user interfaces with a minimum of programming.

Saturday, May 26, 2012

JDK7: JavaBean enhancements in Java 7

JavaBeans are a way of building reusable components for Java applications. They are Java classes that follow certain naming conventions. Several JavaBean enhancements have been added in Java 7. Here we will focus on the java.beans.Expression class, which is useful for executing methods. The execute method has been added to facilitate this capability.

Getting ready
To use the Expression class to execute a method:
  1. Create an array of arguments for the method, if needed.

  2. Create an instance of the Expression class specifying the object that the method is to be executed against, the method name, and any arguments needed.

  3. Invoke the execute method against the expression.

  4. Use the getValue method to obtain the results of the method execution, if necessary.

How to do it...

1. Create a new console application. Create two classes: JavaBeanExample, which contains the main method, and a Person class. The Person class contains a single field for a name, along with constructors, a getter method, and a setter method:
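The class itself is not shown in the post; a minimal sketch consistent with the description (the default name "Taman" is an assumption based on the sample output) would be:

```java
public class Person {
    private String name;

    public Person() {
        this("Taman"); // default name, matching the example output
    }

    public Person(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```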

2. In the main method of the JavaBeanExample class, we will create an instance of the Person class, and use the Expression class to execute its getName and setName methods:
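The main method is likewise not shown; the following sketch reproduces the described sequence (a copy of Person is nested here only so the snippet is self-contained; the argument values follow the sample output):

```java
import java.beans.Expression;

public class JavaBeanExample {
    // Minimal Person repeated here so the snippet compiles on its own
    public static class Person {
        private String name;
        public Person(String name) { this.name = name; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        Person person = new Person("Taman");
        System.out.println("Name: " + person.getName());

        // First constructor argument (a preset value) is unused here, so null
        Expression setExpr =
                new Expression(null, person, "setName", new Object[]{"Mohamed"});
        setExpr.execute();
        System.out.println("Name: " + person.getName());

        Expression getExpr =
                new Expression(null, person, "getName", new Object[]{});
        getExpr.execute();
        System.out.println("Name: " + person.getName());
        System.out.println("getValue: " + getExpr.getValue());
    }
}
```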

3. Execute the application. Its output should appear as follows:

Name: Taman
Name: Mohamed
Name: Mohamed
getValue: Mohamed

How it works...
The Person class used a single field, name. The getName and setName methods were invoked from the main method, where a Person instance was created. The Expression class's constructor has four arguments. The first argument was not used in this example, but can be used to preset a value for the expression. The second argument was the object that the method would be executed against. The third argument was a string containing the name of the method, and the last argument was an array containing the parameters used by the method.

In the first sequence, the setName method was executed using an argument of Mohamed. The output of the application shows that the name was initially Taman, but was changed to Mohamed after the execute method was executed.

In the second sequence, the getName method was executed. The getValue method returns the results of the execution of the method. The output shows that the getName method returned Mohamed.

There's more...
There have been other enhancements to the classes of the java.beans package. For example, the toString method has been overridden in the FeatureDescriptor and PropertyChangeEvent classes to provide a more meaningful description.

The Introspector class provides a way of learning about the properties, methods, and events of a JavaBean without using the Reflection API, which can be tedious. A new getBeanInfo method overload has been added, which uses the Introspector class's control flags to affect the BeanInfo object returned.

The Transient annotation has been added to control what is included in the encoded output. A true value for its value attribute means that the annotated feature should be ignored.

A new constructor has been added to the XMLDecoder class that accepts an InputSource object. Also, the createHandler method has been added, which returns a DefaultHandler object. This handler is used to parse XML archives created by the XMLEncoder class.

A new constructor has been added to the XMLEncoder class. This permits writing out JavaBeans to an OutputStream using a specific charset with a specific indentation.

Wednesday, May 16, 2012

Servers: Data Center Performance with Oracle SPARC T4 "Breaking the Rules"


As enterprise computing grows more demanding, older datacenters struggle to keep pace. But with budgets tight, space at a premium and power costs going through the roof, how can you increase data center performance while minimizing operational overhead?

Only Oracle SPARC T4 servers combine best-in-class performance with real world business value.
  1. World record performance for a wide range of critical enterprise applications.
  2. Faster development and deployment with pre-tuned and certified optimized solutions.
  3. No-cost virtualization for better system utilization and server consolidation.
  4. High compute densities that conserve power, cooling and floor space.
  5. Easy scalability through multiple form factors including blades.
  6. Powerful security features for adding protection without hurting performance.
  7. Seamless migration for your Oracle Solaris applications.
  8. Up to 5x faster performance for a wide range of critical enterprise applications.
  9. Complete integrated support with a single point of support accountability for the full Oracle stack.
Pressure on the data center has never been greater, with the business demanding faster performance, new applications and more capacity.

But as older SPARC servers start to struggle, how can you upgrade your data center affordably, securely and without disrupting the business?

Need more? Here is the ebook for the full story: SPARC T4 Server.

Tuesday, May 15, 2012

Database: When to Use CHECK Integrity Constraints (Oracle)

I have a table that contains lookup data for the system (constant values that are not changed by the system while it is up and running). This data is maintained manually by developers in the database generation script.

Some values should fall within a specific range or between certain limits, some dates must not be earlier than today, some columns' values depend on the values of other columns, and so on.

There are two ways to add this business logic: in a trigger (the general approach) or on the table as a CHECK constraint (the specific approach).

I used the table CHECK constraint option, because these business validations are specific to the table and are simple.

When to use CHECK constraints:
  1. Use CHECK constraints when you need to enforce integrity rules based on logical expressions, such as comparisons.
  2. Never use CHECK constraints when any of the other types of integrity constraints can provide the necessary checking (UNIQUE, PRIMARY KEY, or NOT NULL constraints).
Examples of CHECK constraints include the following:
  1. A CHECK constraint on employee salaries so that no salary value is greater than 10000.
  2. A CHECK constraint on department locations so that only the locations "CAIRO", "HURGADA", and "ALEXANDRIA" are allowed.
  3. A CHECK constraint on the salary and commissions columns to prevent the commission from being larger than the salary.
Restrictions on CHECK Constraints
A CHECK integrity constraint requires that a condition be true or unknown for every row of the table. If a statement causes the condition to evaluate to false, then the statement is rolled back. The condition of a CHECK constraint has the following limitations:
  1. The condition must be a boolean expression that can be evaluated using the values in the row being inserted or updated.
  2. The condition cannot contain subqueries or sequences.
  3. The condition cannot include the SYSDATE, UID, USER, or USERENV SQL functions.
  4. The condition cannot contain the pseudocolumns LEVEL, PRIOR, or ROWNUM.
  5. The condition cannot contain a user-defined SQL function.
Designing CHECK Constraints
When using CHECK constraints, remember that a CHECK constraint is violated only if the condition evaluates to false; true and unknown values (such as comparisons with nulls) do not violate a check condition. Make sure that any CHECK constraint that you define is specific enough to enforce the rule.

For example, consider the following CHECK constraint:
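The constraint itself is not shown in the post; a sketch of what it likely looked like, using the classic EMP table's SAL and COMM columns (the table and constraint names are assumptions), is:

```sql
ALTER TABLE emp
  ADD CONSTRAINT check_sal_comm
  CHECK (sal > 0 OR comm >= 0);
```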

At first glance, this rule may be interpreted as "do not allow a row in the employee table unless the employee's salary is greater than zero or the employee's commission is greater than or equal to zero." But if a row is inserted with a null salary, that row does not violate the CHECK constraint regardless of whether the commission value is valid, because the entire check condition is evaluated as unknown.

In this case, you can prevent such violations by placing NOT NULL integrity constraints on both the SAL and COMM columns.

A single column can have multiple CHECK constraints that reference the column in its definition. There is no limit to the number of CHECK constraints that can be defined that reference a column.

The order in which the constraints are evaluated is not defined, so be careful not to rely on the order or to define multiple constraints that conflict with each other.

According to the ANSI/ISO standard, a NOT NULL integrity constraint is an example of a CHECK integrity constraint, where the condition is the following:
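The condition referred to has this form, with column_name standing in for the constrained column:

```sql
CHECK (column_name IS NOT NULL)
```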

Therefore, NOT NULL integrity constraints for a single column can, in practice, be written in two forms: using the NOT NULL constraint or a CHECK constraint. For ease of use, you should always choose to define NOT NULL integrity constraints, instead of CHECK constraints with the IS NOT NULL condition.

In the case where a composite key can allow only all nulls or all values, you must use a CHECK integrity constraint. For example, the following expression of a CHECK integrity constraint allows a key value in the composite key made up of columns C1 and C2 to contain either all nulls or all values:
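A sketch of such a constraint, using the C1 and C2 columns named in the text (the table and constraint names are assumptions):

```sql
ALTER TABLE some_table
  ADD CONSTRAINT check_c1_c2
  CHECK ((C1 IS NULL     AND C2 IS NULL)
      OR (C1 IS NOT NULL AND C2 IS NOT NULL));
```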

Defining Integrity Constraints
1- With Create table command:
The following examples of CREATE TABLE statements show the definition of several integrity constraints:
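The original statements are not shown; a representative sketch (table, column, and constraint names are assumptions, mirroring the salary and commission rules discussed earlier) is:

```sql
CREATE TABLE emp (
  empno  NUMBER(4)     CONSTRAINT pk_emp PRIMARY KEY,
  ename  VARCHAR2(10)  CONSTRAINT nn_ename NOT NULL,
  sal    NUMBER(7,2)   CONSTRAINT check_sal CHECK (sal <= 10000),
  comm   NUMBER(7,2),
  CONSTRAINT check_comm CHECK (comm <= sal)
);
```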

2- With alter table command
You can also define integrity constraints using the constraint clause of the ALTER TABLE command. The syntax for creating a check constraint in an ALTER TABLE statement is:
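In outline (the lower-case words are placeholders):

```sql
ALTER TABLE table_name
  ADD CONSTRAINT constraint_name
  CHECK (condition);
```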

Disable a Check Constraint:
The syntax for disabling a check constraint is:
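In outline (the lower-case words are placeholders):

```sql
ALTER TABLE table_name
  DISABLE CONSTRAINT constraint_name;
```

To re-enable the constraint afterwards, use ENABLE CONSTRAINT in the same way.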

Why Disable Constraints?
During day-to-day operations, constraints should always be enabled. In certain situations, temporarily disabling the integrity constraints of a table makes sense for performance reasons. For example:
  1. When loading large amounts of data into a table using SQL*Loader.
  2. When performing batch operations that make massive changes to a table (such as changing everyone's employee number by adding 1000 to the existing number).
  3. When importing or exporting one table at a time.

Note: Turning off integrity constraints temporarily speeds up these operations.


Sunday, May 13, 2012

JDK7: Using the diamond operator for constructor type inference

The use of the diamond operator simplifies the use of generics when creating an object. It avoids unchecked warnings in a program, and it reduces generic verbosity by not requiring explicit duplicate specification of parameter types. Instead, the compiler infers the type.

Dynamically-typed languages do this all the time. While Java is statically typed, the use of the diamond operator allows more inferences than before. There is no difference in the resulting compiled code.

The compiler will infer the parameter types for the constructors. This is an example of convention over configuration. By letting the compiler infer the parameter type (convention), we avoid explicit specification (configuration) of the type. Java also uses annotations in many areas to support this approach. Type inference is now available for constructors, whereas before it was only available for methods.

Getting ready…
To use the diamond operator:
  1. Create a generic declaration of an object.
  2. Use the diamond operator, <>, to specify the type inference that is to be used.
How to do it...
  1. Create a simple Java application with a main method. Add the following code example to the main method to see how it works. For example, to declare a java.util.List of strings, we can use the following:
  2. The identifier, list, is declared as a list of strings. The diamond operator, <>, is used to infer the List type as String. No warnings are generated for this code.
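The declaration from step 1 looks like this (wrapped in a minimal runnable sketch; the class name is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

public class DiamondDemo {
    public static void main(String[] args) {
        // The compiler infers <String> from the declaration on the left
        List<String> list = new ArrayList<>();
        list.add("element");
        System.out.println(list);
    }
}
```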
How it works…

When an object is created without specifying the data type, it is called a raw type. For example, the following uses a raw type when instantiating the identifier, list:
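A sketch of such a raw-type instantiation (the surrounding class is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

public class RawTypeDemo {
    public static void main(String[] args) {
        // The raw ArrayList on the right-hand side triggers an
        // unchecked-conversion warning at compile time
        List<String> list = new ArrayList();
        list.add("element");
        System.out.println(list);
    }
}
```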

When the code is compiled, the following warnings are generated:

Note: packt\ uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

An unchecked warning is generated. It is generally desirable to eliminate unchecked warnings in an application. Recompiling with the -Xlint:unchecked option produces more detailed warnings identifying each unchecked operation.

Before Java 7, we could address this warning by explicitly using a parameter type as follows:
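That pre-Java 7 form repeats the type argument on both sides (again wrapped in an assumed class for completeness):

```java
import java.util.ArrayList;
import java.util.List;

public class ExplicitTypeDemo {
    public static void main(String[] args) {
        // The element type is spelled out twice: once in the declaration
        // and once in the constructor call
        List<String> list = new ArrayList<String>();
        list.add("element");
        System.out.println(list);
    }
}
```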

With Java 7, the diamond operator makes this shorter and simpler. This operator becomes even more useful with more complex data types, such as a List of Map objects, as follows:
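A sketch of such a declaration (the identifier and surrounding class are assumptions):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ComplexDiamondDemo {
    public static void main(String[] args) {
        // Without the diamond operator this would repeat the whole
        // type argument: new ArrayList<Map<String, List<String>>>()
        List<Map<String, List<String>>> cityMaps = new ArrayList<>();
        Map<String, List<String>> map = new HashMap<>();
        map.put("Cairo", new ArrayList<String>());
        cityMaps.add(map);
        System.out.println(cityMaps);
    }
}
```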

There's more...

There are several other aspects of type inference that should be discussed:
  1. Using the diamond operator when the type is not obvious.
  2. Suppressing unchecked warnings.

1- Using the diamond operator when the type is not obvious

Type inference is supported in Java 7 and later only if the parameter type for the constructor is obvious. For example, if we use the diamond operator without specifying a type for the identifier, as shown in the following, we will get a series of warnings:
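A sketch of such a declaration (the surrounding class is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

public class NonObviousDemo {
    public static void main(String[] args) {
        // Raw declaration on the left: the compiler has no element type
        // to infer, so subsequent uses of list generate unchecked warnings
        List list = new ArrayList<>();
        list.add("element");
        System.out.println(list);
    }
}
```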

Compiling the program with -Xlint:unchecked results in a series of unchecked warnings, one for each use of the raw identifier.

These warnings will go away if the data type is specified as follows:
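That is, once the identifier has an explicit element type, the diamond inference succeeds and the warnings disappear (surrounding class assumed):

```java
import java.util.ArrayList;
import java.util.List;

public class ObviousTypeDemo {
    public static void main(String[] args) {
        // The declared type List<String> gives the compiler the
        // information it needs; no warnings are generated
        List<String> list = new ArrayList<>();
        list.add("element");
        System.out.println(list);
    }
}
```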

2- Suppressing unchecked warnings
While not necessarily desirable, it is possible to use the @SuppressWarnings annotation to suppress the unchecked warnings generated by the failure to use the diamond operator. The following is an example of this:
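A sketch of such a suppression (class and method names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class SuppressDemo {
    // Suppresses the unchecked warning caused by assigning
    // a raw ArrayList to a List<String>
    @SuppressWarnings("unchecked")
    public static List<String> createList() {
        List<String> list = new ArrayList();
        list.add("element");
        return list;
    }

    public static void main(String[] args) {
        System.out.println(createList());
    }
}
```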

Saturday, May 5, 2012

JDK7: Using buffered IO for files in Java

If you have read my previous article about NIO.2, "The power of Java 7 NIO.2 (JSR 203) (important concepts)", you will have a full understanding and an overall idea of the new NIO.2 features, including buffered file operations.

Buffered IO provides a more efficient technique for accessing files. Two methods of the java.nio.file package's Files class return either a BufferedReader or a BufferedWriter object. These classes provide an easy-to-use and efficient technique for working with text files.

We will illustrate the read operation first. In the There's more section, we will demonstrate how to write to a file.

Getting ready
To read from a file using a BufferedReader object:

1. Create a Path object representing the file of interest
2. Create a new BufferedReader object using the newBufferedReader method
3. Use the appropriate read method to read from the file

How to do it...
1. Create a new console application using the following main method. In this method, we will read the contents of the computers.txt file and then display its contents.
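The main method is not reproduced in the post; a sketch consistent with the How it works... description (the file name computers.txt and the ISO-8859-1 charset follow the text, and the file is assumed to exist) is:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BufferedReadExample {
    public static void main(String[] args) {
        Path path = Paths.get("computers.txt");          // assumed to exist
        Charset charset = Charset.forName("ISO-8859-1"); // ISO Latin Alphabet No. 1
        // try-with-resources closes the reader automatically
        try (BufferedReader reader = Files.newBufferedReader(path, charset)) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
```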

2. Execute the application. Your output should reflect the contents of the computers.txt file, which should be similar to the following:


How it works...
A Path object representing the computers.txt file was created followed by the creation of a Charset. The ISO Latin Alphabet No. 1 was used for this example. Other character sets can be used, depending on the platform used.

A try-with-resources block was used to create the BufferedReader object. This type of try block is new to Java 7, and it results in the BufferedReader object automatically being closed when the block completes.

The while loop reads each line of the file and displays it to the console. Any IOException is thrown as needed.

There's more...
When a byte is stored in a file, its meaning can differ depending upon the intended encoding scheme. The java.nio.charset package's Charset class provides a mapping between a sequence of bytes and 16-bit Unicode code units. The second argument of the newBufferedReader method specifies the encoding to use. There is a standard set of character sets supported by the JVM, as detailed in the Java documentation for the Charset class.

We also need to consider:
• Writing to a file using the BufferedWriter class.
• Unbuffered IO support in the Files class.

The newBufferedWriter method opens or creates a file for writing and returns a BufferedWriter object. The method requires two arguments, a Path object and a specified Charset, and can take an optional third argument. The third argument specifies an OpenOption. If no option is specified, the method behaves as though the CREATE, TRUNCATE_EXISTING, and WRITE options were specified, and will either create a new file or truncate an existing file.

In the following example, we specify a new String object containing a name to add to our computers.txt file. After declaring our Path object, we use a try-with-resources block to open a new BufferedWriter.
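The code itself is not shown; a sketch matching the description (the name value "Lenovo" is an assumption, and computers.txt is assumed to exist) is:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class BufferedWriteExample {
    public static void main(String[] args) {
        String name = "Lenovo";                 // assumed sample name
        Path path = Paths.get("computers.txt"); // assumed to exist
        try (BufferedWriter writer = Files.newBufferedWriter(path,
                Charset.defaultCharset(), StandardOpenOption.APPEND)) {
            writer.newLine();                     // start on a new line
            writer.write(name, 0, name.length()); // write the whole string
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
```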

In this example, we are using the default system charset and StandardOpenOption.APPEND to specify that we want to append the name to the end of our computers.txt file. Within the try block, we first invoke the newLine method against our BufferedWriter object to ensure that our name goes on a new line. Then we invoke the write method against our BufferedWriter object, using our String as the first argument, a zero to denote the beginning character of the String, and the length of our String to denote that the entire String should be written.

If you examine the contents of the computers.txt file, the new name should be appended to the end of the other names in the file.

Unbuffered IO support in the Files class
While unbuffered IO is not as efficient as buffered IO, it is still useful at times. The Files class provides support for the InputStream and OutputStream classes through its newInputStream and newOutputStream methods. These methods are useful in instances where you need to access very small files or where a method or constructor requires an InputStream or OutputStream as an argument.

In the following example, we are going to perform a simple copy operation where we copy the contents of the computers.txt file to a newComputers.txt file. We first declare two Path objects, one referencing the source file, computers.txt, and one specifying our destination file, newComputers.txt. Then, within a try-with-resources block, we open both an InputStream and an OutputStream, using the newInputStream and newOutputStream methods. Within the block, we read in the data from our source file and write it to the destination file.
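A sketch of the described copy operation (the buffer size is an assumption, and computers.txt is assumed to exist):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CopyExample {
    public static void main(String[] args) {
        Path source = Paths.get("computers.txt");
        Path target = Paths.get("newComputers.txt");
        // Both streams are closed automatically by try-with-resources
        try (InputStream in = Files.newInputStream(source);
             OutputStream out = Files.newOutputStream(target)) {
            byte[] buffer = new byte[1024];
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                out.write(buffer, 0, bytesRead);
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
```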

Upon examining the newComputers.txt file, you should see that the content matches that of the computers.txt file.