Faculty Members' Research


The dynamics of a chaotic spiking neuron model are studied mathematically and experimentally. The Nonlinear Dynamic State neuron (NDS) is analysed to further understand the model and improve it. Chaos has many interesting properties, such as sensitivity to initial conditions, space filling, control, and synchronization. As suggested by biologists, these properties may be exploited and play a vital role in carrying out computational tasks in the human brain. The NDS model has some limitations; in this paper the model is investigated to overcome some of these limitations in order to enhance it. To this end, the model's parameters are tuned and the resulting dynamics are studied. The discretization method of the model is also considered. Moreover, a mathematical analysis is carried out to reveal the underlying dynamics of the model after tuning its parameters. The results of the aforementioned methods revealed some facts regarding the NDS attractor and suggested the stabilization of a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space. ...
Further analysis and experimentation are carried out in this paper for a chaotic dynamic model, viz. the Nonlinear Dynamic State neuron (NDS). The analysis and experiments are performed to further understand the underlying dynamics of the model and to enhance it as well. Chaos provides many interesting properties that can be exploited to achieve computational tasks, such as sensitivity to initial conditions, space filling, control, and synchronization. Chaos might play an important role in information processing tasks in the human brain, as suggested by biologists. Equipping artificial neural networks (ANNs) with chaos would enrich the dynamic behaviours of such networks. The NDS model has some limitations that can be overcome in different ways. In this paper, different approaches are followed to push the boundaries of the NDS model in order to enhance it. One way is to study the effects of scaling the parameters of the chaotic equations of the NDS model and to study the resulting dynamics. Another is to study the method used to discretize the original Rössler system that the NDS model is based on. These approaches have revealed some facts about the NDS attractor and suggest why such a model can be stabilized to a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space. ...
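For reference on the discretization discussed above, the following is a minimal sketch of a forward-Euler discretization of the Rössler system that the NDS model is based on; the step size and parameter values here are illustrative textbook defaults, not the NDS model's tuned constants.

```python
def rossler_step(x, y, z, a=0.2, b=0.2, c=5.7, dt=0.01):
    """One forward-Euler step of the Rossler system.

    Parameters a, b, c and step size dt are illustrative assumptions;
    the NDS neuron uses its own tuned constants and discretization.
    """
    dx = -y - z
    dy = x + a * y
    dz = b + z * (x - c)
    return x + dt * dx, y + dt * dy, z + dt * dz

# Iterate the map to trace an approximate trajectory on the attractor.
state = (0.1, 0.0, 0.0)
trajectory = []
for _ in range(10000):
    state = rossler_step(*state)
    trajectory.append(state)
```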
Performance Comparison of Chemical Reaction and Ant Colony Optimization Methods in Allocating Maximally Distant Codes
Waleed Nazeeh Ahmed

Error correcting codes, also known as error controlling codes, are sets of codes with redundancy that allows detecting channel errors. This is quite useful in transmitting data over a noisy channel or when retrieving data from storage with possible physical defects. The idea is to use a set of code words that are maximally distant from each other, hence reducing the chance that noise changes one code word into another valid one. The problem can be viewed as picking v codes out of the u = 2^k available codes of k bits each, such that the aggregate Hamming distance is maximized.
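To make the objective concrete, here is a minimal sketch (with illustrative toy parameters) of evaluating the aggregate Hamming distance of a candidate set of v codewords drawn from the u = 2^k possible k-bit words:

```python
from itertools import combinations

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which two k-bit words differ."""
    return bin(a ^ b).count("1")

def aggregate_distance(codes: list[int]) -> int:
    """Sum of pairwise Hamming distances; the quantity to maximize."""
    return sum(hamming(a, b) for a, b in combinations(codes, 2))

# Toy example: v = 4 codewords chosen from the u = 2**k words, k = 3.
candidate = [0b000, 0b011, 0b101, 0b110]
print(aggregate_distance(candidate))  # 12
```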
Allocating such sets of codes is an optimization problem, which can be described in terms of several components: an objective function f, a vector of variables X = {x1, x2, . . . , xn}, and a vector of constraints C = {c1, c2, . . . , cm} which limit the values assigned to X, where n and m correspond to the problem dimensions and the total number of constraints, respectively. Then the solution s is the set of values assigned to X confined by C, and the solution space S is the set of all possible solutions. The goal is to find the minimum solution s' ∈ S where f(s') ≤ f(s) for all s.
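As an illustration of this formulation only, the following sketch encodes (f, X, C) for a hypothetical toy instance; the objective and constraint here are stand-ins, not the code-allocation problem itself.

```python
from itertools import product

# Hypothetical toy instance: minimize f over integer variables (x1, x2)
# in {0..4}, subject to a single constraint c1.
def f(x):
    return (x[0] - 3) ** 2 + (x[1] - 1) ** 2   # objective function f

constraints = [lambda x: x[0] + x[1] <= 4]      # vector of constraints C

def feasible(x):
    return all(c(x) for c in constraints)

# Exhaustive enumeration of the solution space S (viable only for toys).
S = [x for x in product(range(5), repeat=2) if feasible(x)]
best = min(S, key=f)   # s' such that f(s') <= f(s) for all s in S
print(best, f(best))   # (3, 1) 0
```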
Due to the large solution spaces of such problems, greedy algorithms are sometimes used to generate quick-and-dirty solutions. However, evolutionary search algorithms, such as genetic algorithms, simulated annealing, particle swarms, and others, represent...

Adapting the Chemical Reaction Optimization Algorithm to the Printed Circuit Board Drilling Problem
Waleed Nazeeh Ahmed

Printed Circuit Board (PCB) fabrication throughput depends highly on the time of the hole-drilling stages, which is directly related to the number of holes and the order in which the drill bit moves over them. A typical PCB may have hundreds of holes, pin pads, and vias, and optimizing the time to complete the drilling can significantly affect the production rate. Moreover, the holes may be of different sizes, and to drill two holes of different diameters consecutively, the head of the machine has to move to a tool box and change the drilling equipment. This is quite time consuming, so it is better to partition the holes by diameter: drill all holes of the same diameter, change the drill bit, then drill the holes of the next diameter, and so on. In this case, the drilling problem can be viewed as a series of TSPs, one for each hole diameter, where the aim is to minimize the total travel time of the machine head.
The Travelling Salesman Problem (TSP) is a well-known NP-hard optimization problem that exemplifies many real-life and engineering problems, such as scheduling, and PCB drilling optimization is one such problem. Finding an optimal solution to the TSP may be prohibitively expensive, as the number of possibilities to evaluate in an exact (brute-force) search is (n-1)!/2 for n holes. Many algorithms exist to solve the TSP in an engineering sense; semi-optimal...
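As a simple illustration of the "series of TSPs" view (a nearest-neighbour baseline, not the Chemical Reaction Optimization method the paper adapts), here is a minimal sketch over hypothetical hole coordinates grouped by diameter:

```python
import math

def nearest_neighbour_tour(holes):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited hole."""
    tour = [holes[0]]
    remaining = set(holes[1:])
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda h: math.dist(last, h))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

# Hypothetical holes grouped by drill-bit diameter: mm -> (x, y) points.
holes_by_diameter = {
    0.8: [(0, 0), (5, 1), (1, 4)],
    1.2: [(2, 2), (6, 5)],
}

# One tour per diameter: drill a group, change the bit, drill the next.
for diameter, holes in holes_by_diameter.items():
    print(diameter, nearest_neighbour_tour(holes))
```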

Analyzing a chaotic spiking neural model: The NDS neuron
Waleed Nazeeh Ahmed

Chaos has important properties that can be exploited to carry out information processing tasks, such as sensitivity to initial conditions, control, and synchronization. It has been suggested by biologists that chaos plays an important role in information processing tasks in the human brain. One of the chaotic neural models developed recently is the Nonlinear Dynamic State (NDS) neuron. The model has some limitations and can be enhanced in different ways. This research has three aims. The first is to study the effects of the scaling factors of the chaotic attractor of the NDS model, which is based on the Rössler model. The second is to reconsider the analytical solutions by tuning the parameters of the model. The third is to enhance the NDS model in terms of stabilization so that the suggested large number of memories can be exploited. While the Hopfield neural network offers a memory capacity of about 0.15n (where n is the number of neurons), a single NDS neuron may theoretically give access to a large number of unstable periodic orbits (UPOs), which correspond to memories in phase space.

Principal Investigator: Dr. Mohammad Alhawarat
Co-Investigators: Waleed Nazih and Mohammad Eldesouki

Error-Correction-Code Allocation Using The Chemical Reaction Optimization Algorithm
Waleed Nazeeh Ahmed

One of the fundamental problems in coding theory is to determine, for a given set of parameters q, n, and d, the value Aq(n,d), which is the maximum possible number of code words in a q-ary code of length n and minimum distance d. Codes that attain this maximum are said to be optimal. Since the value is unknown for certain sets of parameters, scientists have determined lower bounds, and researchers have investigated the use of different evolutionary algorithms to improve the lower bounds for a given set of parameters. In this project, we are interested in finding the set of maximally distant codes for a certain set of parameters, to provide error detection and/or correction features. For a practically sized problem, this forms a challenge due to the prohibitively large solution space.
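For concreteness, the sketch below builds a binary code of length n with minimum distance at least d by a greedy "lexicode"-style scan; this is an illustrative baseline giving a quick lower bound on A2(n,d), not the project's optimization method.

```python
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def greedy_code(n: int, d: int) -> list[int]:
    """Greedily collect n-bit words that are pairwise at distance >= d."""
    code = []
    for word in range(2 ** n):
        if all(hamming(word, c) >= d for c in code):
            code.append(word)
    return code

# len(greedy_code(n, d)) is an (often weak) lower bound on A2(n, d).
print(len(greedy_code(5, 3)))  # 4, which here matches A2(5,3) = 4
```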

Generally, optimization is a process with several components: an objective function f, a vector of variables X = {x1, x2, . . . , xn}, and a vector of constraints C = {c1, c2, . . . , cm} which limit the values assigned to X, where n and m correspond to the problem dimensions and the total number of constraints, respectively. Then the solution s is the set of values assigned to X confined by C, and the solution space S is the set of all possible solutions. The goal is to find the minimum solution s' ∈ S where f(s') ≤ f(s) for all s.

Evolutionary optimization algorithms, such as genetic algorithms, simulated annealing, ant colony optimization, particle swarms, and others, represent good alternatives...
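Among the metaheuristics named above, simulated annealing is the simplest to sketch; the following is a minimal generic skeleton (the neighbourhood, cooling schedule, and objective are placeholders, and this is not the Chemical Reaction Optimization algorithm the project uses).

```python
import math
import random

def simulated_annealing(s0, f, neighbour, t0=1.0, cooling=0.995, steps=10000):
    """Generic SA skeleton: accept worse moves with probability
    exp(-delta / T) so the search can escape local minima."""
    s, t = s0, t0
    best = s
    for _ in range(steps):
        cand = neighbour(s)
        delta = f(cand) - f(s)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            s = cand
        if f(s) < f(best):
            best = s
        t *= cooling
    return best

# Toy usage: minimize a one-dimensional quadratic.
result = simulated_annealing(
    s0=0.0,
    f=lambda x: (x - 2) ** 2,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
)
print(result)  # close to 2
```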

A Soft-Quantization Based Methodology for Encoding True-Color Images for Optimal Viewing on Palette-Oriented Displays
Waleed Nazeeh Ahmed

Digital images are modeled as a fine grid of 2D points; each is called a pixel, which is short for picture element. The color of each pixel is modeled as a discrete 3D vector whose elements are the red, green, and blue components, which can express any color perceivable by the human visual system [6, 19].

Virtually all high-end modern digital displays can realize this model with a resolution of 8 bits (i.e. 256 quantum levels) or more per color component, which is known as true-color digital image display. However, there remains a wide-scale need to deal with devices and setups with limited (sometimes very limited) color display capabilities, where each pixel can only be switched to one of a (relatively) small set of colors called a palette. Here are a few examples of such devices and setups that are in wide use:

* Certain display modes of operating systems, e.g. MS-Windows' safe mode; image formats, e.g. GIF; fast previewing of archived images, e.g. while browsing … etc.

* Printing devices with limited color capabilities; e.g. monochromatic printers, faxes … etc.

* Displays of low-end of digital gadgets; e.g. watches, calculators, wireless/cell phones … etc.

* Old fashioned sizeable electronic ad boards serving in public places (that might be too expensive to replace).

* Sizeable mosaic and mosaic-like image compositions.

* ... etc.

To carry out the classic fundamental task of optimally displaying a true-color digital image on a palette-oriented...
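As background for this task, the classic hard-quantization baseline (which the project's soft-quantization methodology is positioned against) maps each true-color pixel to its nearest palette entry; a minimal sketch:

```python
def nearest_palette_color(pixel, palette):
    """Hard quantization: map an (r, g, b) pixel to the closest
    palette entry by squared Euclidean distance in RGB space."""
    def dist2(c):
        return sum((p - q) ** 2 for p, q in zip(pixel, c))
    return min(palette, key=dist2)

# Toy 4-color palette: black, white, red, green.
palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 255, 0)]
print(nearest_palette_color((200, 30, 40), palette))  # (255, 0, 0)
```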

Probabilistic Vector Quantization for Discrete HMM-Based Learning: Type-Written Arabic OCR; a Case Study
Waleed Nazeeh Ahmed

Vector quantization (VQ) is a fundamental signal processing operation that attributes a given point in a multidimensional space (i.e. a vector) to one of the centroids in a codebook, which in turn is inferred (via some offline codebook-making algorithm like LBG, K-means, etc.) to optimally represent a population of points (e.g. features) corresponding to some observable phenomenon [12, 17, 18, 24]. VQ is typically implemented via a "minimum distance" criterion, which in turn is an instance of the hard-deciding "winner-takes-all" policy.

Our intended project, on the other hand, introduces a novel probabilistic criterion to VQ (ProVQ) that is an instance of a fairer soft-deciding approach. Our probabilistic VQ builds a probability distribution for the belonging of some given point/vector to each centroid in the codebook that is inversely proportional to the distances (i.e. directly proportional to the closeness) between that point and all the codebook’s centroids. The actual runtime arbitration of the given point to a specific centroid is decided via a random election simulator following that probability distribution.
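A minimal sketch of this idea follows; the inverse-distance weighting used here is one plausible reading of "inversely proportional to the distances", and the exact probability model of the project may differ.

```python
import random

def provq(point, codebook, eps=1e-9):
    """Probabilistic VQ: sample a centroid index with probability
    inversely proportional to its distance from the point."""
    def dist(c):
        return sum((p - q) ** 2 for p, q in zip(point, c)) ** 0.5
    weights = [1.0 / (dist(c) + eps) for c in codebook]  # closeness
    total = sum(weights)
    probs = [w / total for w in weights]                 # distribution
    return random.choices(range(len(codebook)), weights=probs)[0]

# Toy codebook in a 2-D feature space.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(provq((0.2, 0.1), codebook))  # usually 0, sometimes 1 or 2
```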

We speculate that our ProVQ, which results in smooth edges separating the different classes, will mitigate the negative effect of over-fitting that degrades the performance of machine learning/classification systems incorporating VQ [11], and may also make these systems more robust to the inevitable noise superimposed on their inputs.

To experimentally attest such a speculation, we will incorporate ProVQ in one of the state-of-the-art discrete HMM-based Arabic type-written OCR systems [2, 3, 12, 32], and hence compare its recognition performance with the...

Processing the Text of the Holy Quran: a Text Mining Study
Mohammad Omar Alhawarat

The Holy Quran is the reference book for more than 1.6 billion Muslims around the world. Extracting information and knowledge from the Holy Quran is of high benefit for specialists in Islamic studies as well as non-specialists. This paper initiates a series of research studies that aim to serve the Holy Quran and provide helpful and accurate information and knowledge to all human beings. The planned research studies also aim to lay out a framework that will be used by researchers in the field of Arabic natural language processing, by providing a "Golden Dataset" along with useful techniques and information that will advance this field further. The aim of this paper is to find an approach for analyzing Arabic text and then providing statistical information that might be helpful to people in this research area. In this paper the Holy Quran text is preprocessed, and then different text mining operations are applied to it to reveal simple facts about its terms. The results show a variety of characteristics of the Holy Quran, such as its most important words, its word cloud, and the chapters with high term frequencies. All these results are based on term frequencies calculated using both Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) methods.
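For reference, the TF-IDF weighting used above can be sketched as follows; this is the standard formulation applied to illustrative toy documents, not the paper's actual preprocessing pipeline.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-term TF-IDF for each tokenized document:
    tf = count / doc length; idf = log(N / number of docs with the term)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        counts = Counter(doc)
        scores.append({
            term: (c / len(doc)) * math.log(n / df[term])
            for term, c in counts.items()
        })
    return scores

# Toy corpus of tokenized "chapters".
docs = [["mercy", "light", "mercy"], ["light", "guidance"], ["mercy"]]
print(tf_idf(docs))
```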

Aldaej, A., Krause, P. (2014). An Enhanced Approach to Semantic Markup of VLEs Content Based on Schema.org. In 4th Int. Workshop on Learning and Education with the Web of Data, 13th Int. Semantic Web Conference. Riva del Garda, Italy.
Abdulaziz Abdullah Aldaej

Coming soon...
