lunes, 31 de octubre de 2011

Java Stacks

Well, we all know what stacks are in Java, but let's explain a little of what we will need for the implementation of Nachos.

The first thing is to define the stack, i.e., instantiate the Stack class.

Stack pila = new Stack();

Since the stack will hold strings, the elements we insert will be Java String objects.
The method that inserts elements into the stack is .push(). This method receives as a parameter the element to be inserted.

for (int x=1;x<=10;x++) pila.push(Integer.toString(x));

We have created a loop that generates the numbers, relying on the Integer class and its .toString() method to convert the numbers to strings.

Once we have all the elements, we proceed to empty the stack. We iterate over the stack until it is empty, which the .empty() method tells us. In each iteration we extract an item from the stack using the .pop() method.

while (!pila.empty())
System.out.println(pila.pop());
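
Putting the snippets above together, here is a minimal, self-contained sketch (the class name StackSnippet is my choice, and I use the generic Stack<String> form rather than the raw Stack of the snippets):

```java
import java.util.Stack;

public class StackSnippet {
    public static void main(String[] args) {
        Stack<String> pila = new Stack<String>();
        // Push the numbers 1..10 as strings.
        for (int x = 1; x <= 10; x++) pila.push(Integer.toString(x));
        // Pop until the stack is empty; LIFO order prints 10 down to 1.
        while (!pila.empty())
            System.out.println(pila.pop());
    }
}
```

Note the LIFO behavior: the last string pushed ("10") is the first one popped.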



This is the code:

import java.util.Stack;

public class Pilando {

    Stack<String> pila = new Stack<String>();

    public void ingresar() {
        for (int x = 1; x <= 3; x++) {
            pila.push(Integer.toString(x));
        }
    }

    public void sacar() {
        // Pop until the stack is empty; Stack.empty() tells us when to stop.
        while (!pila.empty()) {
            System.out.println("Popping element " + pila.pop());
        }
        System.out.println("The stack is empty!!!");
    }

    public void show1() {
        System.out.println("The first element added is ---> " + pila.firstElement());
    }

    public void show2() {
        System.out.println("The last element added is ---> " + pila.lastElement());
    }

    public void show3() {
        System.out.println("The number of elements is ---> " + pila.size());
    }
}

Algorithm Analysis

An algorithm is a finite set of unambiguous and effective instructions that indicate how to solve a problem; it produces at least one output, receives zero or more inputs and, to run, requires a finite amount of resources.
The selection criterion here is CPU time; this implies that the faster program is the more efficient one.

For example Quicksort



In this case the strategy is to pick an element as a pivot and split the array into two parts: one with the elements smaller than the pivot, the other with those greater than the pivot. Then recursive calls are made to sort both parts. Quicksort also takes advantage of the fact that an array of size 1 is already sorted.
The algorithm that splits the array into two (not necessarily equal) parts is defined in Algorithm 21. That algorithm partitions the array using the first array element as the pivot.

In this case, using asymptotic analysis, we define:

Size of the problem: the array size
Basic operation: comparisons

To carry out the time analysis we notice that the partition algorithm performs n comparisons. Informally, this is because the pivot is compared against all the array elements. We now analyze each case of quicksort.

Worst case: occurs when the pivot is always the smallest (or largest) element, so one of the two parts is empty.
Best case: the pivot lands exactly in the middle.
In this case the array is divided into two parts of equal size. Assuming that n = 2^k, each part will have size 2^(k-1) (since the pivot is already in its place).
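
The two cases above can be written as recurrences for the number of comparisons T(n); this notation is mine, a sketch consistent with the "n comparisons per partition" count stated above:

```latex
% Worst case: one part is empty, the other has n-1 elements.
T(n) = T(n-1) + n \;\Longrightarrow\; T(n) \in \Theta(n^2)

% Best case: two parts of (roughly) equal size n/2.
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + n \;\Longrightarrow\; T(n) \in \Theta(n \log n)
```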







Bibliography

- Thomas H. Cormen et al., Introduction to Algorithms, McGraw-Hill, 2000
- David Harel, Algorithmics: The Spirit of Computing, Addison-Wesley, 2nd edition, 1992
- Ellis Horowitz and Sartaj Sahni, Fundamentals of Computer Algorithms, Computer Science Press, Potomac, Maryland, 1978

Theoretical Part II - Virtual Memory

Virtual Memory
David Berumen Salazar

"To give the illusion of unlimited ram for all of your programs" was one of the benefits virtual memory(VM) gave many years ago. Historically storaging devices have stayed behind in the miniaturization race; while the processors rapidly got smaller and faster, puting together millions and millions of transistors in a couple of centimeters, hard disks and ram still look like an ancient thing (SSD are a great step in this).

This gap between computing power and data-transfer power, together with the invention of multitasking, motivated VM. To overcome the hardware limitations, abstractions were developed whose idea was to give user and kernel processes more address space while sacrificing as little efficiency as possible, so as to do more and actually make use of those very expensive processors; for example, using permanent storage devices to extend temporary ones through a virtual memory manager (VMM). But enough history; let's try to understand one of the dozens of ways to implement a VMM, in this case in Linux 2.6.

Overview of VMM in Linux 2.6 Kernel

Virtual memory management is one of the most complex parts of any OS, since it has always kept evolving to become more efficient, and Linux is no exception. If you want to understand the "black magic" behind this, I recommend "Understanding the Linux Virtual Memory Manager" by Mel Gorman; in any case, I tried to condense it as much as possible.

Much of the information I obtained came from "Understanding Virtual Memory in Red Hat Enterprise Linux", because I think it gives a pretty good overview. So if you want to learn more on your own, check the PDF or ask me; I believe it is an excellent exercise to look at how a real VMM works and to understand that there are many ways to do VMM.

References:



I also did some performance tests of the VM and paging.


- PERFORMANCE TESTS

These are the slides of the Linux VM overview.


Theoretical Part II - File Systems

The slides for the Theoretical Part II.





Systems Performance




Direct comparison: FAT vs ext3

Ext3 vs FAT




First of all, the access time is 1 ms lower on ext3,
and the read rate is 0.2 MB/s higher on ext3.
But ext3 was released in November 2001 and the FAT system in August 1996, and FAT works better with small files.



The pseudo-code for the File System.