# 5.3. Mathematical Constraints Formulation

## 1. Agile Manifesto

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

* Individuals and interactions over processes and tools
* Working software over comprehensive documentation
* Customer collaboration over contract negotiation
* Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more. (Agile Manifesto authors, 2001)

## 2. Principles behind Agile Manifesto

The following principles are based on the Agile Manifesto.

1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

(Agile Manifesto authors, 2001)

## 4. Disadvantages of Agile Methodologies

So far, we have looked at the pros of Agile development. However, everything has its weaknesses, and Agile is no different.

1. Active user involvement and close collaboration [4]
   * Although these requirements are critical success factors for any project, they depend on having a customer who is willing and able to spend time with the development team and who can represent all system stakeholders. In many cases, customer representatives have other demands on their time and cannot play a full part in the software development. Where there are external stakeholders, such as regulators, it is difficult to represent their views to the agile team.
2. Requirements emerge and evolve [2]
   * This creates the very meaning of agile: flexibility. Flexibility to change course as needed and to ensure delivery of the right product. This leads to potential for scope creep, which we all know can create the risk of everlasting projects.
   * Additionally, there is much less predictability, at the start of the project and during it, about what the project is actually going to deliver, especially for large projects.
   * Even though the Agile Manifesto values "Customer Collaboration over Contract Negotiation", a contract is needed in any case. This disadvantage can also make it harder to define a business case for the project, and harder to negotiate fixed-price projects. Without the maturity of a strong and clear vision, and the discipline of fixing timescales and trading scope, this is potentially very dangerous.
   * Moreover, prioritizing changes can be extremely difficult, especially in systems with many stakeholders. Typically, each stakeholder gives different priorities to different changes.
3. Testing is integrated throughout the life cycle [3]
   * This does have the effect of reducing some very significant risks that can cause many projects to fail. However, it implies that testers are needed throughout the project, which effectively increases the cost of resources on the project.
4. Incremental delivery and barely sufficient Agile requirements [3+10]
   * This means less information is available to new starters in the team about features and how they should work.
   * Moreover, rapid iterations and short-term planning for development do not always fit in with the longer-term cycles of business planning and marketing. Marketing managers may need to know the product features several months in advance to prepare an effective marketing campaign.
5. Harmony of the development team [5+9+11]
   * Individual team members may not have suitable personalities for the intense involvement that is typical of agile methods, and therefore may not interact well with other team members.
   * It can also create potential misunderstandings if the teamwork and communication are not at their best, and difficulties for team members (especially testers) who are used to everything being defined up front.
   * Clearly, as it requires close collaboration and face-to-face conversation between all members of the team, it is not suitable for non-local teams due to differences in time zones.
6. Frequent delivery [3+8+10]
   * Agile can be intense for the team. They need to really complete each feature 100% within each sprint or iteration, and the relentlessness of iterations can be mentally quite tiring, so it is important to find a sustainable pace for the team.
   * Under pressure from delivery schedules, team members may not have time to carry out desirable system simplifications.

## 2.1. PSO Algorithm

PSO is a stochastic, iterative, population-based evolutionary optimization algorithm. It was originally introduced by Kennedy and Eberhart and later extended with an inertia weight by Shi and Eberhart [19]. It uses swarm intelligence, which is based on social-psychological and biological social principles. By analogy with swarm intelligence, each swarm member (particle) takes advantage of a private memory and has a degree of randomness in its movement, as well as the knowledge gained by the whole swarm, to discover the best available food source. The problem of a food search can be solved by optimizing a fitness function. The communication structure (or social network) is defined by assigning neighbors to each particle. All particles in the search space have fitness values, which are evaluated by the fitness function to be optimized, and velocities, which direct their motion in the multidimensional search space. Each particle remembers the information about its best solution and its position in the search space, and both are available to its neighbors. In order to update its position and velocity appropriately, each particle keeps a memory holding the particle's best position $p_i$ ("pbest"), which represents the best solution the particle has found by itself, and the global best location $g$ ("gbest"), which the particle acquires by communicating with a subset of the swarm. The $i$th particle's velocity and position updates are based on the following equations:

$$v_i^{t+1} = w\, v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right), \tag{1}$$

$$x_i^{t+1} = x_i^{t} + v_i^{t+1}, \tag{2}$$

where $w$ is the inertia factor, which takes linearly decreasing values downward from 1 to 0 over a predefined number of iterations, $v_i$ is the velocity, $x_i$ is the current solution (or position), $r_1$ and $r_2$ are uniform random numbers in the range between 0 and 1, and $c_1$ and $c_2$ are positive constant parameters called "acceleration coefficients."
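As a minimal illustrative sketch (not the authors' implementation), one application of the velocity and position updates in equations (1) and (2) to every particle can be written as:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w, c1=2.0, c2=2.0):
    """Apply equations (1) and (2) once to every particle of the swarm."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()  # uniform in [0, 1]
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]         # x <- x + v
    return positions, velocities
```

The acceleration coefficients default to 2.0 here only for illustration; the paper leaves their values unspecified at this point.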

## 2.2. FCM Algorithm

The FCM algorithm is a deterministic, constructive optimization algorithm. Different studies show that FCM outperforms several existing clustering algorithms, namely, the Self-Organizing Map (SOM) neural network algorithm [20], the K-means algorithm [21], and hierarchical clustering [20]. It is the most popular fuzzy clustering method, originally proposed by Dunn [22] and later improved by Bezdek [23].

The FCM algorithm is efficient, straightforward, and easy to implement. It is based on fuzzy behavior and provides a natural technique for producing clusterings in which membership weights have a natural, though not probabilistic (deterministic), interpretation. The main goal of FCM is to minimize an objective function that accounts for the similarity between elements and cluster centers.

Suppose a set of $N$ objects in $d$-dimensional space, indexed by $i$. Each object is represented by a vector of quantitative variables $x_i = (x_i^1, \dots, x_i^d)$. Suppose a set of $C$ prototypes, indexed by $k$ and associated with the $C$ groups, where each prototype is also represented as a vector of quantitative variables $y_k = (y_k^1, \dots, y_k^d)$. Suppose a matrix $U = [u_{ik}]$ of membership degrees, where $u_{ik}$ represents the membership degree of object $i$ in group $k$; its value belongs to the real interval $[0, 1]$.

The FCM algorithm aims at finding a prototype matrix $Y$ and a membership degree matrix $U$ that minimize $J$, the objective function called the "fitness function":

$$J = \sum_{i=1}^{N} \sum_{k=1}^{C} u_{ik}^{m}\, \lVert x_i - y_k \rVert^2.$$

The prototypes that minimize the objective function are updated using the following equation:

$$y_k = \frac{\sum_{i=1}^{N} u_{ik}^{m}\, x_i}{\sum_{i=1}^{N} u_{ik}^{m}}. \tag{3}$$

The membership degrees that minimize the objective function are updated according to the following equation:

$$u_{ik} = \left[\, \sum_{j=1}^{C} \left( \frac{\lVert x_i - y_k \rVert}{\lVert x_i - y_j \rVert} \right)^{2/(m-1)} \right]^{-1}, \tag{4}$$

where $m > 1$ is the level of cluster fuzziness. In the limit $m \to 1$, the membership degrees converge to 0 or 1, which implies a crisp partitioning.

The FCM algorithm is effective. It is faster than the PSO algorithm because it requires fewer function evaluations, but it is sensitive to initial values and usually falls into local optima. The weaknesses of these two algorithms motivate the proposal of an alternative approach based on the combination of the FCM and PSO algorithms to form a novel FCMPSO algorithm, which maintains the merits of both.
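One FCM iteration, alternating the membership update (4) and the prototype update (3), can be sketched as follows (an illustrative sketch, not the authors' code; a small epsilon guards against division by zero when an object coincides with a prototype):

```python
import math

def fcm_step(X, Y, m=2.0):
    """One FCM iteration: membership update (4), then prototype update (3)."""
    n, c = len(X), len(Y)
    # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    U = [[0.0] * c for _ in range(n)]
    for i in range(n):
        d = [max(math.dist(X[i], Y[k]), 1e-12) for k in range(c)]
        for k in range(c):
            U[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
    # Prototype update: y_k = sum_i u_ik^m x_i / sum_i u_ik^m
    dim = len(X[0])
    for k in range(c):
        w = [U[i][k] ** m for i in range(n)]
        s = sum(w)
        Y[k] = [sum(w[i] * X[i][t] for i in range(n)) / s for t in range(dim)]
    return U, Y
```

Repeating this step until the objective $J$ stops improving yields the FCM clustering.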

## 2.3. FCMPSO Algorithm

In this work, the FCMPSO algorithm is proposed to solve both the binary and the extended hardware/software partitioning problems. This algorithm brings together the advantages of both the PSO and FCM algorithms: the PSO algorithm has a strong global search capability, while the FCM algorithm produces approximate solutions faster but falls into local optima easily. Hence, we integrated the FCM algorithm with the PSO algorithm to provide a near-optimal solution at faster speed.

Firstly, we apply the FCM algorithm to create the initial (uncertain) partitioning solutions in order to reduce (limit) the search space of the PSO algorithm. Then, we execute the PSO algorithm to obtain a near-optimal partitioning solution. The pseudocode of our FCMPSO algorithm is presented in Algorithm 1.

Algorithm 1 (FCMPSO algorithm).

Require: dataset of FCM parameters:
(1) Select the cluster centers
(2) Select the number of objects $N$
(3) Select the maximum number of iterations
(4) Select the level of cluster fuzziness $m$
(5) Randomly initialize the membership degree $u_{ik}$ of object $i$ to group $k$
(6) Evaluate the partitioning objective function $J$

Dataset of PSO parameters:
(1) Select the maximum number of iterations
(2) Select the population (swarm) size
(3) Select the inertia factor $w$ that takes linearly decreasing values downward from 1 to 0
(4) Select the uniform random numbers $r_1$ and $r_2$ in the range between 0 and 1
(5) Select the acceleration coefficients $c_1$ and $c_2$ (positive constant parameters)

FCM algorithm:
while the maximum number of iterations is not reached and the objective still improves:
&nbsp;&nbsp;(a) Update the prototype matrix $Y$: fix the membership degree matrix $U$ and update the prototypes using (3)
&nbsp;&nbsp;(b) Update the membership degree matrix $U$: fix the prototypes and update the membership degrees using (4)
&nbsp;&nbsp;(c) Increment the iteration counter
&nbsp;&nbsp;(d) Evaluate the partitioning objective function $J$
end while
Return matrices $Y$ and $U$

PSO algorithm:
(1) Initialize the particle positions with the matrix $Y$ output by the FCM algorithm
For each particle $i = 1$ to $D$ (dimension of the swarm):
&nbsp;&nbsp;(2) Compute the fitness value of $x_i$
&nbsp;&nbsp;(3) Compute the velocity $v_i$
&nbsp;&nbsp;(4) Initialize $p_i$ (pbest) to the particle's initial position: $p_i = x_i$
&nbsp;&nbsp;(5) Initialize $g$ (gbest) to the minimal-fitness position of the swarm
End For
Repeat until the stopping criteria are met:
&nbsp;&nbsp;(1) Update the particle's velocity as in (1)
&nbsp;&nbsp;(2) Update the particle's position as in (2)
&nbsp;&nbsp;If fitness$(x_i) <$ fitness$(p_i)$: update the best known position of particle $i$: $p_i = x_i$
&nbsp;&nbsp;If fitness$(p_i) <$ fitness$(g)$: update the swarm's best position: $g = p_i$
end
return $g$

The complexity analysis of the original PSO and FCM algorithms is as follows:

(i) The PSO algorithm is $O(S \cdot T)$, where $S$ is the swarm population size and $T$ is the maximum number of iterations.
(ii) The FCM algorithm is $O(N \cdot C \cdot T)$, where $N$ is the number of objects, $T$ is the maximum number of iterations, and $C$ is the number of cluster centers. In our case, $C$ is a small constant fixed by the number of partitioning targets.
(iii) The FCMPSO algorithm is $O(D \cdot N \cdot T)$, where $D$ is the dimension of the population issued from the FCM algorithm, $N$ is the number of objects, and $T$ is the maximum number of iterations.

As can be seen, the computational complexity of the FCMPSO algorithm is mainly affected by the number of objects and the dimension of the population issued from the FCM algorithm. The disadvantages of the PSO algorithm are that it easily falls into local optima with high-dimensional populations and has a low convergence rate in the iterative process. It can also be observed that the computational complexity of the hybrid FCMPSO remains acceptable when it is applied to solve high-dimensional and complex problems.

Our proposed FCMPSO algorithm will be tested on two kinds of partitioning approaches: the binary and the extended one. The main difference between these two kinds of architectures lies in the number and types of the devices used. In the binary partitioning approach, the target architecture includes a single hardware processing unit and a single software processing unit, or a reconfigurable architecture. In the extended partitioning approach, the target architecture includes multiple hardware processing components and several software processing components.

Before starting the hardware/software partitioning process, it is necessary to transform the initial specification into a formal specification.
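The two-phase idea of Algorithm 1, i.e., an FCM-seeded swarm refined by PSO, can be sketched as follows. This is an illustrative sketch under simplifying assumptions: `seeds` stands in for the matrix $Y$ produced by the FCM phase, and `fitness` is any objective to minimize; the names and defaults are hypothetical, not the authors' code.

```python
import random

def fcmpso(fitness, seeds, iters=50, w0=0.9, c1=2.0, c2=2.0):
    """PSO refinement of an FCM-seeded population (hybrid sketch)."""
    pos = [list(s) for s in seeds]             # FCM output seeds the swarm
    vel = [[0.0] * len(s) for s in seeds]
    pbest = [list(p) for p in pos]
    pcost = [fitness(p) for p in pos]
    g = list(pbest[min(range(len(pos)), key=lambda i: pcost[i])])
    for t in range(iters):
        w = w0 * (1.0 - t / iters)             # linearly decreasing inertia
        for i, p in enumerate(pos):
            for d in range(len(p)):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - p[d])
                             + c2 * r2 * (g[d] - p[d]))
                p[d] += vel[i][d]
            c = fitness(p)
            if c < pcost[i]:                   # personal best update
                pbest[i], pcost[i] = list(p), c
                if c < fitness(g):             # global best update
                    g = list(p)
    return g
```

Because the swarm starts from FCM's already-promising solutions rather than random points, far fewer particles are needed, which is the source of the speed-up claimed for FCMPSO.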
The benchmarking scenario model used to validate our proposed hardware/software partitioning approach is presented in the next section.

## 3. Benchmarking Scenario: Task Graphs

Different benchmarks and applications, varying from one another, are used to validate hardware/software partitioning approaches. The embedded application to be partitioned is generally given as a Directed Acyclic Graph (DAG) that represents the sequence of nodes in the embedded system application.

In this work, we use the Task Graphs For Free (TGFF) tool to generate a set of graphs with 20, 50, 100, 200, 500, 1000, and 2000 nodes. Each graph is denoted as $G = (V, E)$, where $V$ represents the set of tasks and $E$ is the set of edges, which represent the data dependencies between two nodes. The partitioning process aims to find a partition $P = \{V_{hw}, V_{sw}\}$ such that $V_{hw} \cup V_{sw} = V$ and $V_{hw} \cap V_{sw} = \emptyset$. It generates a partitioning decision vector $X = (x_1, \dots, x_n)$ representing the implementation way of the task nodes.

Such an approach is necessary to avoid dependency on a particular system architecture, since the system can be varied through parameter changes. The variations in the number of input/output nodes, the node metrics (i.e., execution time, cost, area, and power), and the number of software/hardware processors are randomly assigned in the TGFF input configuration file.

The obtained graphs are used as system specifications to validate the efficiency of our proposed partitioning algorithm for solving both the binary and the extended partitioning problems.
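For concreteness, a tiny hand-made graph (hypothetical, not a TGFF output) shows how a decision vector $X$ encodes a binary partition $P = \{V_{hw}, V_{sw}\}$:

```python
# Hypothetical 5-task graph; edges encode data dependencies between nodes.
tasks = [0, 1, 2, 3, 4]
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]

# Decision vector X: X[i] = 1 -> task i in hardware, X[i] = 0 -> software.
X = [0, 1, 1, 0, 1]

V_hw = {t for t in tasks if X[t] == 1}
V_sw = {t for t in tasks if X[t] == 0}
# The two blocks cover all tasks and do not overlap: a valid partition.
assert V_hw | V_sw == set(tasks) and V_hw & V_sw == set()
```

A partitioning algorithm searches over such vectors $X$; the edges matter when communication costs between blocks are modeled.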

## 4. Hardware/Software Partitioning as a Binary Problem

This section provides the formal description of the binary partitioning problem, especially the target architecture used and the mathematical model of the objective function and the related constraints.

## 4.3. Mathematical Constraints Formulation

During the hardware/software partitioning process, based on the constraint definitions in the previous subsection, the total software and hardware execution times ($T_S$, $T_H$), the used slices rate constraint ($S$), and the memory requirement ($M$) can be formalized as follows.

(i) Execution Time Constraint. $T_S$ and $T_H$ represent the execution times of, respectively, the software and the hardware implemented solution. They can be expressed as follows:

$$T_S = \sum_{i=1}^{N} (1 - x_i)\, ts_i, \qquad T_H = \sum_{i=1}^{N} x_i\, th_i,$$

where $N$ is the total number of tasks in the system and $ts_i$ and $th_i$ are the software and hardware execution times, respectively, of the $i$th task. $x_i$ represents a binary variable: its value is 0 if the $i$th task is assigned to a software component and 1 if the $i$th task is assigned to a hardware component.

(ii) The Memory Requirement. $M$ represents the memory requirement only for the components assigned to the software architecture. The total memory is obtained as follows:

$$M = \sum_{i=1}^{N} (1 - x_i)\, m_i,$$

where $m_i$ is the software cost of the $i$th task.

(iii) The Used Slices Rate. $S$ represents the number of slices that a partitioning solution will use in a particular hardware component. The total used slices rate is obtained as follows:

$$S = \sum_{i=1}^{N} x_i\, s_i,$$

where $s_i$ is the hardware cost of the $i$th task.

Partitioning algorithms try to find a trade-off among these conflicting constraints to improve system performance. This can be modeled as the minimization of the objective function (F.F.):

$$F.F. = \min\left(\alpha\,(T_S + T_H) + \beta\, M + \gamma\, S\right),$$

where $\alpha$, $\beta$, and $\gamma$ are weighting coefficients.

The target architecture was generally assumed to consist of only one software and only one hardware unit. In recent years, many researchers have committed to solving the extended hardware/software partitioning problem, that is, for multiprocessor systems, with a high-quality solution. In the next section, we introduce the formal description of the extended hardware/software partitioning problem proposed in this work.
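The three constraint terms above can be evaluated directly from a decision vector. The sketch below is illustrative: the function name and the unit weights are assumptions, not the paper's notation.

```python
def binary_cost(X, ts, th, mem, slices, weights=(1.0, 1.0, 1.0)):
    """Evaluate the binary-partitioning terms for decision vector X.

    X[i] = 0 -> task i in software, 1 -> task i in hardware.
    """
    T_S = sum((1 - x) * t for x, t in zip(X, ts))  # software execution time
    T_H = sum(x * t for x, t in zip(X, th))        # hardware execution time
    M = sum((1 - x) * m for x, m in zip(X, mem))   # memory (software side)
    S = sum(x * s for x, s in zip(X, slices))      # used slices (hardware side)
    a, b, c = weights
    return a * (T_S + T_H) + b * M + c * S
```

For example, with two tasks where task 1 goes to hardware, the hardware task contributes its hardware time and slices while the software task contributes its software time and memory.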

## 5. Hardware/Software Partitioning as an Extended Problem

This section provides the formal description of the extended architecture.

## 5.3. Mathematical Constraints Formulation

In this work, the fitness function (F.F.) to minimize is defined as follows:

$$F.F. = \min\left(\alpha\, T + \beta\, S + \gamma\, M\right),$$

where we define the following.

(i) Execution Time. $T$ represents the time taken at each node. It can be expressed as follows:

$$T = \sum_{i=1}^{N} t_i^{x_i},$$

where $P$ is the number of processors. $x_i$ represents a two-bit variable whose value is "00", "01", "10", or "11" according to the processing component to which the $i$th task is assigned, and $t_i^{x_i}$ is the execution time of the $i$th task on the selected component.

(ii) Used Slices Rate. $S$ represents the number of slices that a partitioning solution will use in a particular hardware component. The total used slices rate is obtained as follows:

$$S = \sum_{i \in V_{hw}} s_i,$$

where $N$ is the total number of tasks in the system, $V_{hw}$ is the set of tasks mapped to hardware components, and $s_i$ is the hardware cost of the $i$th task.

(iii) The Memory Requirement. $M$ represents the memory requirement only for the components assigned to the software architecture. The total memory is obtained as follows:

$$M = \sum_{i \in V_{sw}} m_i,$$

where $V_{sw}$ is the set of tasks mapped to software components and $m_i$ is the software cost of the $i$th task.
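The two-bit encoding maps each task to one of four processing components. The sketch below illustrates the idea; the unit names are hypothetical, since the paper only fixes the "00"/"01"/"10"/"11" codes themselves:

```python
# Hypothetical unit names for the four two-bit codes of the extended model.
UNITS = {"00": "CPU0", "01": "CPU1", "10": "FPGA0", "11": "FPGA1"}

def unit_times(codes, exec_time):
    """Accumulate per-unit execution time for an extended partition."""
    totals = {u: 0.0 for u in UNITS.values()}
    for code, t in zip(codes, exec_time):
        totals[UNITS[code]] += t   # task's time charged to its unit
    return totals
```

Summing over the units then gives the execution-time term $T$ of the fitness function.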

## 6.1. Empirical Results and Discussion for Binary Partitioning Approach

The binary partitioning problem was solved considering different task graphs ranging from 20 to 2000 nodes. The values of the software execution time $ts$, hardware execution time $th$, memory requirement $m$, and used slices rate $s$ were generated randomly as described in Section 4.2.

First, simulations of the standard PSO, ACO, GA, FCM, and SA algorithms were performed. The aim was to determine the parameters of each algorithm that give the best solutions and to compare the partitioning results. We considered the following heuristic algorithm parameters:

(i) For the SA algorithm: initial temperature = 10 and final temperature = 0.
(ii) For the GA algorithm: a population of 100 individuals, selection rate = 0.5, mutation rate = 0.2, and crossover rate = 0.5.
(iii) For the PSO algorithm: a population of 100 individuals and a linearly decreasing inertia weight.
(iv) For the ACO algorithm: an evaporation parameter of 0.95, and positive and negative pheromone values of 0.06 and 0.3, respectively.
(v) For the FCM algorithm: the cluster centers and the fuzziness level $m$ were set according to the number of nodes in the task graph.

Each algorithm was executed 10 times. In each execution, the objective function was evaluated 100 times, and the best solutions were always kept. Performance analyses of the processing time and of the quality of the cost solution found by the considered algorithms are given in Figures 3 and 4, respectively.

The processing time comparison of the standard partitioning algorithms, presented in Figure 3, shows that the FCM algorithm is faster than the PSO, SA, GA, and ACO algorithms because it requires fewer function evaluations. Figure 3 also shows that the PSO algorithm takes considerable time compared to the FCM algorithm to discover and evaluate the random input particles because of the large input swarm (50 particles).

Moreover, Figure 4 reveals that the quality result, which measures the fitness function cost percentage of a solution, is best for the PSO algorithm. The results also demonstrate that the PSO algorithm has a strong global search capability, while the FCM algorithm generates approximate solutions faster. Hence, a combination of the FCM and PSO algorithms allows the generation of a near-optimal solution at faster speed.

The simulation results of the proposed FCMPSO algorithm, as well as of the original FCM, PSO, ACO, GA, and SA algorithms, are presented in Figures 5 and 6. Figure 5 reveals that the FCMPSO algorithm improves the processing-time convergence of the PSO algorithm by combining it with the FCM clustering algorithm, which minimizes its input population size and limits its "local search" space.

The cost comparison provided in Figure 6 reveals the efficiency of the proposed algorithm at generating optimal solutions compared with the original PSO and FCM algorithms. We can also observe that the costs of the generated solutions become closer as the number of nodes decreases: with a 2000-node graph, the variation between the FCMPSO cost and the FCM cost is 11.11%, while it is less than 4% with a 20-node graph. This variation is due to the role the FCM algorithm plays in improving the convergence of PSO by minimizing its "local search" input population to generate an improved optimal solution.

Overall, the FCMPSO algorithm obtains significant improvements over the FCM and PSO algorithms in terms of processing time and generated solution cost in the binary partitioning approach.

To measure the efficiency of combining the FCM and PSO algorithms, we compared it to hybrid combinations including PSO then GA, GA then SA, GA then ACO, ACO then SA, FCM then GA, FCM then SA, and finally ACO followed by FCM. The simulated results are shown in Figures 7 and 8. The processing time of FCMPSO is 0.033 seconds while that of FCM-SA is 0.24 seconds; this represents around an 86% speed reduction in favor of FCMPSO.

As shown in Figures 9, 10, 11, and 12, the best cost of FCMPSO is 170.19 while it is 170.61 for FCM-GA on 20-node graphs. This represents around a 0.25% improvement in result quality in favor of FCMPSO.

Recently, much research has been committed to improving the performance of hardware/software partitioning in multiprocessor systems, and different solutions have been developed. In the next section, we will prove the efficiency of our proposed algorithm in the extended partitioning approach.