Each function takes a query as input and returns a set of arrays as output.
final_prod: The name of the final product
task: The name of the process steps
task_type: The type (Manufacturing, Handling, Assembly, Quality Control, Loading, Unloading) of the process steps
successor: The successor of each process step
time: The service time of the process steps
machine: The station where the process steps are executed.
component: The name of the components
ass_task: The name of the process steps on which the components are assembled.
capacity: The capacity of the buffers.
pred: The name of all the physical elements of the system
succ: The downstream element of each physical element.
The elements of these arrays are returned in random order. Each function is developed following the steps below (the code shown refers to the function process_plan):
Import the required libraries.
from SPARQLWrapper import SPARQLWrapper, JSON
import numpy
Write the query as a text file. A SPARQL query has four main components:
PREFIX clauses declare shorthand prefixes for the IRIs used in the query.
FROM and FROM NAMED clauses specify the RDF dataset to be addressed.
SELECT clause identifies the variables to appear in the results.
WHERE clause provides the basic graph pattern to match against the data graph.
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
...
SELECT DISTINCT ?parttype ?pplan ?task ?taskclass ?succ ?timeDet
FROM <http://www.ontoeng.com/AssemblyLine>
FROM NAMED <http://www.ontoeng.com/AssemblyLine>
FROM <http://www.ontoeng.com/VFLibProduct>
FROM NAMED <http://www.ontoeng.com/VFLibProduct>
FROM <http://www.ontoeng.com/VFLibStations>
FROM NAMED <http://www.ontoeng.com/VFLibStations>
FROM <http://www.ontoeng.com/VFLibBuilding>
FROM NAMED <http://www.ontoeng.com/VFLibBuilding>
...
WHERE {
    # get process plans
    ?pplan rdf:type owl:NamedIndividual .
    ?pplan rdf:type/rdfs:subClassOf* ifc:IfcTaskType .
    ?parttype ifcext:hasAssignedObject|^ifcext:hasAssignmentTo ?pplan .
    ?parttype rdf:type/rdfs:subClassOf* factory:ArtifactType .
    ...
}
Import the text file of the query, convert it to a string, pass the string as the SPARQL query, query the endpoint, and return the results.
# open the text file in read mode
text_file = open("Query/ProcessPlan.txt", "r")
# convert the file to a string
data = text_file.read()
# close the file
text_file.close()
sparql = SPARQLWrapper("http://mi-eva-d001.stiima.cnr.it/fuseki/VLFT")  # endpoint
sparql.setQuery(data)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
Convert the results to arrays.
array1 = []
array2 = []
array3 = []
...
for result in results["results"]["bindings"]:
    if 'resource' in result:
        array1.append(result["parttype"]["value"])
        array2.append(result["pplan"]["value"])
        array3.append(result["task"]["value"])
        ...
Convert the values from URI (Uniform Resource Identifier) format to short URIs.
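For example, a minimal helper (a sketch, not the project's actual code; the example URI is illustrative) keeps only the local name after the last '#' or '/':

```python
def shorten_uri(uri):
    """Return the local name of a URI: the fragment after '#' if present,
    otherwise the last path segment."""
    if '#' in uri:
        return uri.rsplit('#', 1)[1]
    return uri.rstrip('/').rsplit('/', 1)[-1]

print(shorten_uri("http://www.ontoeng.com/AssemblyLine#Task1"))  # Task1
```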
The function takes the results from DataQuery as input and returns a set of arrays as output.
order_machine: The station where the process steps are executed.
order_task_type: The type (Manufacturing, Handling, Assembly, Quality Control, Loading, Unloading) of the process steps
order_time: The service time of the process steps
order_capacity: The capacity of the buffers
order_ass: The name of the assembly tasks
part: The name of the parts
part_type: The type (Final, Component) of the parts.
To develop the function the steps below are followed:
Import the required functions and the related input arrays.
from Query.DataQuery import process_plan, components, buffers

[final_prod, p_plan, task, task_type, successor, time, machine] = process_plan()
[component, ass_task] = components()
capacity = buffers()
Check that exactly one process plan and one final product exist, and that the assembly tasks correspond to the components.
for i in range(0, len(final_prod)):
    for j in range(0, len(final_prod)):
        if final_prod[i] != final_prod[j]:
            raise ValueError('Error, there is more than one final product')
...
for i in range(0, len(task)):
    if task_type[i] == 'AssemblyTask':
        if task[i] not in ass_task:
            raise ValueError('Error, the assembly tasks do not correspond to the components')
...
Order the input arrays based on the precedence relations in the process plan and create the order_arrays.
order_task = []
for i in range(0, len(task)):
    if task[i] not in successor:  # the task that is not the successor of any other is the first task
        actual_task = task[i]  # scalar indicating the last task added to the ordered sequence
        order_task.append(task[i])  # the first task is added to the ordered sequence
if len(order_task) != 1:
    raise ValueError('Error, there is more than one first station')
i = 0
while i < len(task):
    if task[i] == actual_task and successor[i] != '0':
        order_task.append(successor[i])  # the successor of the actual task is added to the ordered sequence
        actual_task = successor[i]  # the actual task is updated
        i = 0
    else:
        i = i + 1
...
Order the physical elements based on their connections and check that they allow the execution of the process plan in terms of the sequence of process steps.
current = -1
for i in range(0, len(order_machine)):
    j = 0
    flag = 1
    while flag == 1 and j < len(order_elem):
        if order_machine[i] == order_elem[j]:
            if j < current:
                raise ValueError('Error, the physical connections do not correspond to the process connections')
            current = j
            flag = 0
        j = j + 1
The code above is inserted as a comment in the scripts because some of the physical connections returned by the query Elements.txt are wrong with respect to the designed ones:
Element PP1.PPs_FloatingY instead of PP1
Elements PPH1 and PI1 are not successors of any other element
PP1 is downstream element of itself
RPP1 is downstream element of both T1 and RT1.
If the code is run, it raises the error above.
Define the array of the parts.
part = []
part.append(final_prod[0])  # define the final product
if len(ass_task) == len(component):  # the components must be one more than the assembly tasks
    part.append(str(final_prod[0]) + 'Component')
for i in range(0, len(order_component)):
    part.append(order_component[i])
part_type = ['final']  # define the type of parts
for k in range(1, len(part)):
    part_type.append('component')
Automatic model generation
The file DataModel has to be run by the user. NB: the current version of data_model runs under these assumptions:
Only one final product is produced, so just one process plan exists.
The service time of the stations depends only on the task, not on the served class.
The configuration of the system is a line without parallel flows.
Only the first component can be processed on its own before being assembled; all the other components visit a join node as their first object.
If one station performs n tasks, then n stations are created, each one in charge of one task only.
If n components are joined within the same task, then (n-1) joins are created in sequence. This set of joins is positioned before the station in charge of the assembly task.
The final product visits all the joins and, at each one, a component is assembled on it. At last the station processes the final product.
Join objects are visited by parts just before the related assembly station, because this allows starvation to be considered properly.
The service and interarrival times are deterministic.
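To make the station and join counts implied by the assumptions above concrete, here is a small sketch (task_list and n_components are made-up illustrative values, not project variables): one station is created per task, and the joins are chained so that their total number is one less than the number of components.

```python
# Illustrative inputs (not project data): four tasks and three components
# (including the ad-hoc '<final product>Component').
task_list = ['Manufacturing', 'Manufacturing', 'AssemblyTask', 'AssemblyTask']
n_components = 3

n_stations = len(task_list)  # one station per task, each in charge of one task only
n_joins = n_components - 1   # joins chained in sequence before the assembly stations

print(n_stations, n_joins)  # 4 2
```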
To develop the code the steps below are followed (only the second step requires the user to manually insert input data):
Import the required libraries, functions and the related input arrays.
from Query.DataProcess import data_process
from jsimIO import *

[machine, machine_type, time, capacity, part, part_type] = data_process()
Set the simulation options (optional, only if flag=1) and the interarrival time of the parts (mandatory).
flag = 1  # flag 0 if the simulation options are the default ones, 1 if they are specified by the user
if flag == 1:
    options = Default.ModelOptions
    options["maxSamples"] = "100000000"
    options["maxEvents"] = "100000000"
    options["maxSimulated"] = "10000"
    options["maxTime"] = "100000000"
arrival_time = [10]
Verify whether the process is a manufacturing or an assembly one. The two cases are modelled with two separate procedures.
process_type = 0  # 0 if manufacturing process, 1 if assembly process
z = 0
while z < len(machine_type):
    if machine_type[z] == 'AssemblyTask':  # process_type = 1 if there is at least one assembly task
        process_type = 1
        z = len(machine_type)
    else:
        z = z + 1
Create the Model instance, if flag=1 consider the options.
if flag == 1:
    model = Model("Modello_DataModel", options)
else:
    model = Model("Modello_DataModel")
for j in range(0, len(machine)):
    exec('M' + str(j + 1) + '.set_service(product, Dist.Determ(' + str(time[j]) + '))')
5.1.4) Define the routings.
source.add_route(product, M1, 1)
k = 0
while k < (len(machine) - 1):
    k = k + 1
    exec('M' + str(k) + '.add_route(product, M' + str(k + 1) + ', 1)')
exec('M' + str(k + 1) + '.add_route(product, sink, 1)')
Assembly process procedure
5.2.1) Create an array of component1,...,component(len(products)).
k = 0
component = []
for i in range(0, len(part) - 1):
    k = k + 1
    component.append("component_%i" % k)
5.2.3) Define the final product class and the related fork, the component classes and the related joints.
for i in range(0, len(part)):
    if part_type[i] == 'final':  # the first product should always be the final one by construction
        final = OpenClass(model, '"' + str(part[i]) + '"', source, Dist.Determ(arrival_time[0]))
        fork = Fork(model, "Fork", '"' + str(part[i]) + '"')
    else:
        globals()["component_%i" % i] = OpenClass(model, '"' + str(part[i]) + '"', fork, Dist.Determ(arrival_time[0]))
for j in range(1, len(component)):
    globals()["Join%i" % j] = Join(model, '"Join' + str(j) + '"', final, 2)
5.2.4) Define the machines and set their capacity.
for k in range(0, len(machine)):
    globals()['M%i' % (k + 1)] = Station(model, '"' + str(machine[k]) + '"', buffer_size=capacity[k])
5.2.5) Set the service times. All the manufacturing tasks before the first assembly one are executed on the first component (created ad hoc as 'final product_Component' in the function data_processing). The subsequent tasks are executed on final (the final product).
i = 0
count = 0  # counts the machining operations done on the first component before the assembly process starts
while machine_type[i] != 'AssemblyTask':
    i = i + 1
    count = count + 1
for j in range(0, count):  # the machines process component_1
    exec('M' + str(j + 1) + '.set_service(' + str(component[0]) + ', Dist.Determ(' + str(time[j]) + '))')
for k in range(count, len(machine)):  # the machines process final
    exec('M' + str(k + 1) + '.set_service(final, Dist.Determ(' + str(time[k]) + '))')
5.2.6) Define the routings. Until the first assembly task, the first component is forked from final and processed by all the required machines. If the first station executes a manufacturing task, the fork for the first component is connected to a machine; otherwise it is connected to the first join.
nMachine = 0  # tracks the machines and joins for which the routing is already defined
nJoin = 1
source.add_route(final, fork, 1)
for i in range(1, len(component)):  # define the fork entity for each component except the first one
    exec('fork.add_route(' + str(component[i]) + ', Join' + str(i) + ', 1)')
if machine_type[0] != 'AssemblyTask':  # define the fork entity for the first component
    exec('fork.add_route(' + str(component[0]) + ', M1, 1)')
    nMachine = nMachine + 1
    while machine_type[nMachine] != 'AssemblyTask':  # set the routing of the first component
        exec('M' + str(nMachine) + '.add_route(' + str(component[0]) + ', M' + str(nMachine + 1) + ', 1)')
        nMachine = nMachine + 1
    if machine_type[nMachine] == 'AssemblyTask':  # (always true once the while cycle ends) define the join entity for the first component
        exec('M' + str(nMachine) + '.add_route(' + str(component[0]) + ', Join1, 1)')
        while nJoin < len(assembly) and assembly[nJoin - 1] == assembly[nJoin]:
            nJoin = nJoin + 1
            exec('Join' + str(nJoin - 1) + '.add_route(final, Join' + str(nJoin) + ', 1)')
        exec('Join' + str(nJoin) + '.add_route(final, M' + str(nMachine + 1) + ', 1)')
        nMachine = nMachine + 1
        nJoin = nJoin + 1
else:  # define the fork and join objects for the first component
    exec('fork.add_route(' + str(component[0]) + ', Join1, 1)')
    while nJoin < len(assembly) and assembly[nJoin - 1] == assembly[nJoin]:
        nJoin = nJoin + 1
        exec('Join' + str(nJoin - 1) + '.add_route(final, Join' + str(nJoin) + ', 1)')
    exec('Join' + str(nJoin) + '.add_route(final, M' + str(nMachine + 1) + ', 1)')
    nJoin = nJoin + 1
    nMachine = nMachine + 1
All the other components are forked from final, and their routing includes only one join each because, after the first join object is visited, all the machines process final. Check whether the routing is defined up to the last station: if so, the last station is connected to the sink; otherwise continue.
if len(machine) - nMachine == 0:  # if only the last station remains, it is connected to the sink
    exec('M' + str(nMachine) + '.add_route(final, sink, 1)')
else:
    while len(machine) - nMachine != 0:  # define the routing for final until the second-last station
        if machine_type[nMachine] == 'AssemblyTask':
            while nJoin < len(assembly) and assembly[nJoin - 1] == assembly[nJoin]:
                nJoin = nJoin + 1
                exec('Join' + str(nJoin - 1) + '.add_route(final, Join' + str(nJoin) + ', 1)')
            exec('M' + str(nMachine) + '.add_route(final, Join' + str(nJoin) + ', 1)')
            nMachine = nMachine + 1
            exec('Join' + str(nJoin) + '.add_route(final, M' + str(nMachine) + ', 1)')
            nJoin = nJoin + 1
        else:  # manufacturing task
            exec('M' + str(nMachine) + '.add_route(final, M' + str(nMachine + 1) + ', 1)')
            nMachine = nMachine + 1
    exec('M' + str(nMachine) + '.add_route(final, sink, 1)')  # connect the last station to the sink
Run the simulation.
model.add_measure()
path = model.write_jsimg()
ret = model.solve_jsimg()
print(ret)
Results
A folder named "name_of_model"_"date"_"time" is created.
It contains a .jsimg input file for the simulation in JSIM, a .jsim file with the results of the simulation, and a .csv file containing the log of the simulation.
The LOG table is created as shown below. The columns indicate, in order:
Loggername: the name of the station
Timestamp
Job_ID: the number of the job
Class_ID: the name of the part type.
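The log can be inspected with Python's csv module; the sketch below assumes a ';'-separated file with the columns listed above (the delimiter and the sample rows are assumptions, not taken from a real log):

```python
import csv
import io

# toy stand-in for the .csv log written by the simulator (rows are invented)
raw = """Loggername;Timestamp;Job_ID;Class_ID
M1;0.0;1;FinalProduct
M2;10.0;1;FinalProduct
"""

rows = list(csv.DictReader(io.StringIO(raw), delimiter=';'))
# stations visited by job 1, with the time of each visit
visits = [(r["Loggername"], float(r["Timestamp"])) for r in rows if r["Job_ID"] == "1"]
print(visits)  # [('M1', 0.0), ('M2', 10.0)]
```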