C# Anonymous Functions and Lambda Expressions


Anonymous functions are functions that are defined inline, i.e. on the fly. Before I go into explaining the concepts, the easier way is to go through some code. Below is a small program that searches for a particular name in a List and, if the name is found, writes it to the console.

The code below shows how to implement such a program without using anonymous functions.
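Something along these lines, as a minimal sketch where the list contents and the searched name "Sam" are purely placeholders:

using System;
using System.Collections.Generic;

class Program
{
    // The search logic lives in a separately defined method.
    static bool SearchName(string name)
    {
        return name == "Sam";
    }

    static void Main()
    {
        var names = new List<string> { "John", "Sam", "Rita" };

        // List<T>.Find takes the method as a Predicate<string> and returns
        // the first matching element (or null if nothing matches).
        var output = names.Find(SearchName);

        if (output != null)
            Console.WriteLine("name found: " + output);
    }
}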



The code below shows how to implement the same program WITH anonymous functions.
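Again as a minimal sketch with the same placeholder data, the separately defined method is replaced by an inline lambda expression:

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var names = new List<string> { "John", "Sam", "Rita" };

        // The search logic is now an anonymous function, written inline
        // as a lambda expression, so no separate SearchName method is needed.
        var output = names.Find(n => n == "Sam");

        if (output != null)
            Console.WriteLine("name found: " + output);
    }
}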


And we get the same output with more compact code using lambda expressions.


Result

Our code is more compact and easier to read and maintain.

Explanation:

So what's happening is that in our first program we are calling the List<T>.Find method and passing it the function that we defined separately:

var output = names.Find(SearchName);

The above call uses a Predicate delegate: Find expects a Predicate<string>, and the compiler converts the SearchName method (or the lambda) into that delegate for us.
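As a quick illustration of the delegate itself (reusing the names list and the SearchName method from the sketches above; the values are placeholders), all of the following supply the same Predicate<string> to Find:

// Predicate<T> is a built-in delegate type: it takes a T and returns a bool.
Predicate<string> byName = SearchName;   // a named method assigned to the delegate
var a = names.Find(byName);              // passing the delegate explicitly
var b = names.Find(SearchName);          // method group, converted automatically
var c = names.Find(n => n == "Sam");     // lambda expression, also converted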

Types of Delegates

1) Action
2) Func

More on this in the next installment

Using Jquery DataGrid with PHP and MySql as a Datagrid Part 2

jQuery DataTables has many options that allow fine control over the display of the table. The options can be provided as parameters at initialization time. Some of the options are:
<script type="text/javascript">
$(document).ready(function(e) {
    $('.data-grid').dataTable(
     {
          "paging":   false,
          "ordering": false,
          "info":     false
     });
});
</script>

  
To change the number of records displayed per page, set the "pageLength" option. For example:
 
"pageLength": 50 
 
A full list of options is available at https://datatables.net/reference/option/ 
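As a concrete sketch, the option slots into the same initialization object used above (same .data-grid selector; the value 50 is just an example):

<script type="text/javascript">
$(document).ready(function() {
    $('.data-grid').dataTable({
        "pageLength": 50   // show 50 rows per page instead of the default 10
    });
});
</script>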
 
 

Using Jquery DataGrid with PHP and MySql as a Datagrid Part 1

A datagrid is a major requirement in any database application. Not many free datagrids are available for PHP, and those that are available lack features. In this series we are going to use jQuery DataTables along with PHP and MySQL to create a sortable, filterable, customizable datagrid.

 Simple MySql Recordset Display

Step 1: First, let's add all the required libraries to the page.
<script src="https://code.jquery.com/jquery-1.12.4.js" type="text/javascript"></script>
<script src="https://cdn.datatables.net/1.10.15/js/jquery.dataTables.min.js"></script>
<link href="https://cdn.datatables.net/1.10.15/css/jquery.dataTables.min.css" rel="stylesheet" />

Step 2: Get the data from the database and display it in an HTML table.

<table class="data-grid">
<thead>
<tr>

  
<?php
// Define database connection information
define("HOSTNAME","localhost");
define("USERNAME","root");
define("PASS","");
define("DB","world");

// Make a connection
$connection = new mysqli(HOSTNAME, USERNAME, PASS, DB);
if ($connection->connect_error) {
    echo "Sorry No Connection";
    exit;
}

// Get the column names of the table and print them as the header row
$results = $connection->query("SHOW COLUMNS FROM city");
while ($headers = $results->fetch_array(MYSQLI_NUM)) {
    echo "<th>" . $headers[0] . "</th>";
}
echo "</tr>";
echo "</thead><tbody>";

// Get the table data from the database and output it row by row
$output = $connection->query("SELECT * FROM city");
while ($rs = $output->fetch_array(MYSQLI_NUM)) {
    echo "<tr>";
    for ($i = 0; $i < count($rs); $i++) {
        echo "<td>" . $rs[$i] . "</td>";
    }
    echo "</tr>";
}
?>
</tbody>
</table>

  

Step 3: Finally, initialize the DataTable.

<script type="text/javascript">
$(document).ready(function(e) {
    $('.data-grid').dataTable();
});
</script>

  

Save the file in your virtual directory and load it in your browser; you should see the rows of the city table rendered as a sortable, searchable DataTable.


The Source File can be downloaded from here.


In the next installment we will look into loading data into the Table using Ajax.

Hadoop MapReduce

MapReduce is a core component of the Apache Hadoop software framework. As the name implies, the MapReduce component Maps and Reduces: it distributes work to different nodes within a cluster (Map) and organizes the returned results into an answer to the query being made (Reduce).


There are three main components of MapReduce

  1. JobTracker: The node that manages all jobs in a cluster. It is also known as the master node. Jobs are divided into Tasks assigned to individual machines in a cluster. 
  2. TaskTracker: A component that tracks every task assigned to an individual machine.
  3. JobHistoryServer: This component tracks completed jobs.

MapReduce distributes input data and collates the results. It does so by operating in parallel across massive clusters, and jobs can be split across any number of servers. MapReduce libraries are available in several languages; they abstract away what happens under the hood, so programmers do not have to worry about the intricacies of the distributed computing paradigm.
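To make the Map and Reduce phases concrete, here is a minimal sketch of the classic word-count job written against the Hadoop Java MapReduce API; the class names and the command-line input/output paths are illustrative:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // MAP: each node turns its slice of the input into (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) continue;
                word.set(token);
                context.write(word, ONE);
            }
        }
    }

    // REDUCE: the (word, 1) pairs are collated into a total count per word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. an HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}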

Each node reports back to the master node; if a child node doesn't report back, the master node can re-assign its task to any other node. This makes MapReduce highly fault-tolerant, with the only single point of failure being the master node.



What is Hadoop



Hadoop is a framework for processing huge amounts of data across clusters of computers, using commodity hardware in a distributed computing environment. It can work on a single server or on thousands of machines, each with its own storage. It is therefore a massively parallel execution environment that brings the power of supercomputing to commodity hardware. Hadoop is primarily used for big data analytics.
Hadoop should be seen as an ecosystem comprising many components that range from data storage, to data integration, to data processing, to specialized tools for data analysts.

Hadoop Components


HDFS is a main component of Hadoop: a distributed file system able to run on commodity hardware. This is where the data is stored, and it provides the foundation for other tools, such as HBase.

  1. MapReduce: Hadoop's main execution framework, a programming model for distributed, parallel data processing that breaks jobs into map phases and reduce phases (hence the name). It enables resilient, distributed processing of massive unstructured data sets across commodity computer clusters, in which each node of the cluster includes its own storage.
  2. HBase: A column-oriented NoSQL database. Simply put, HBase is the data store for Hadoop and big data.
  3. Zookeeper: A centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. Zookeeper is Hadoop's distributed coordination service, specifically designed for distributed management; many components of Hadoop depend on it.
  4. Oozie: Oozie is Hadoop's workflow scheduler. It schedules Hadoop jobs and is integrated with the rest of the Hadoop stack.
  5. Pig: Pig is a platform for analyzing large data sets. It has its own scripting language, Pig Latin, which a compiler translates into sequences of MapReduce jobs.
  6. Hive: An SQL-like, high-level language. It works like Pig but translates SQL-like queries into MapReduce sequences.
The Hadoop ecosystem also contains several other frameworks:
  1. Sqoop: A tool to transfer data between Hadoop and relational databases.
  2. Flume: A tool to move data from individual machines into HDFS.

Running Drupal in Docker

I will assume that you have already installed docker. If you haven't installed docker please visit https://www.docker.com/ to download a...