MENDEL Soft Computing Journal
Feed: http://46.28.109.63/index.php/mendel/issue/feed (updated 2024-06-30)
Editor: Radomil Matoušek, mendel.journal@gmail.com

MENDEL Soft Computing Journal is an Open Access international journal dedicated to the rapid publication of high-quality, peer-reviewed research articles in the fields of Evolutionary Computation, Genetic Programming, Swarm Intelligence, Neural Networks, Deep Learning, Fuzzy Logic, Big Data, Chaos, Bayesian Methods, Optimization, Intelligent Image Processing, and Bio-inspired Robotics.

The journal is fully open access: all articles are available on the internet to all users immediately upon publication (Gold Open Access). The journal has print and electronic versions, and issues appear semi-annually, in June and December. Special issues may also be published by decision of the Editorial Board.

The journal is published under the auspices of the Institute of Automation and Computer Science of the Brno University of Technology.


Quick Hidden Layer Size Tuning in ELM for Classification Problems
http://46.28.109.63/index.php/mendel/article/view/299
Audi Albtoush (owdaybtoush@mutah.edu.jo), Manuel Fernandez-Delgado, Haitham Maarouf, Asmaa Jameel Al Nawaiseh
Published: 2024-07-15

The extreme learning machine (ELM) is a fast neural network with outstanding performance. However, selecting an appropriate number of hidden nodes is time-consuming, because training must be run for several candidate values, which is undesirable when a real-time response is needed. We propose moving-average, exponential-moving-average, and divide-and-conquer strategies to reduce the number of trainings required to select this size. Compared with the original, constrained, mixed, sum, and random sum extreme learning machines, the proposed methods achieve a time reduction of up to 98% with equal or better generalization ability.
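As an illustration of the divide-and-conquer strategy named in the abstract, a minimal Python sketch might look as follows; the basic ELM implementation and the interval-shrinking heuristic here are simplified assumptions for illustration, not the paper's exact procedure:

```python
# Minimal, hypothetical sketch of the divide-and-conquer strategy: shrink the
# candidate interval of hidden-layer sizes toward the better-scoring half, so
# only O(log(range)) trainings are needed instead of a full sweep.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

def train_elm(X, y, n_hidden, rng):
    """Basic ELM: random hidden layer, least-squares (pseudo-inverse) output."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    T = np.eye(y.max() + 1)[y]                   # one-hot targets
    beta = np.linalg.pinv(H) @ T                 # output weights
    return W, b, beta

def accuracy(model, X, y):
    W, b, beta = model
    return np.mean(np.argmax(np.tanh(X @ W + b) @ beta, axis=1) == y)

def dc_search(Xtr, ytr, Xva, yva, lo=10, hi=1000, tol=20):
    rng = np.random.default_rng(0)
    trainings = 0
    while hi - lo > tol:
        m1, m2 = lo + (hi - lo) // 3, hi - (hi - lo) // 3
        a1 = accuracy(train_elm(Xtr, ytr, m1, rng), Xva, yva)
        a2 = accuracy(train_elm(Xtr, ytr, m2, rng), Xva, yva)
        trainings += 2
        if a1 >= a2:
            hi = m2  # the better size is likely in [lo, m2]
        else:
            lo = m1  # the better size is likely in [m1, hi]
    return (lo + hi) // 2, trainings

X, y = load_digits(return_X_y=True)
Xtr, Xva, ytr, yva = train_test_split(X / 16.0, y, random_state=0)
size, runs = dc_search(Xtr, ytr, Xva, yva)
print(f"selected hidden size ~{size} after only {runs} trainings")
```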
Speckle Noise Suppression in Digital Images Utilizing Deep Refinement Network
http://46.28.109.63/index.php/mendel/article/view/301
Mohamed AbdelNasser (mohamed.abdelnasser@aswu.edu.eg), Ehab Alaa Saleh (Pop_29918@hotmail.com), Mostafa I. Soliman (m.soliman@aswu.edu.eg)
Published: 2024-07-15

This paper proposes a deep learning model for speckle noise suppression in digital images. The model consists of two interconnected networks: the first performs the initial suppression of speckle noise, while the second refines the resulting features, capturing more complex patterns and preserving the texture details of the input images. The performance of the proposed model is evaluated with different backbones for the two networks: ResNet-18, ResNet-50, and SENet-154. Experimental results on two datasets, Boss steganography and COVIDx CXR-3, demonstrate that the proposed method yields competitive despeckling results. The proposed model with the SENet-154 encoder achieves PSNR and SNR values above 37 dB on both datasets and outperforms other state-of-the-art methods (Pixel2Pixel, DiscoGAN, and BicycleGAN).


Formation Tracking With Size Scaling of Double Integrator Agents
http://46.28.109.63/index.php/mendel/article/view/303
Djati Wibowo Djamari (djati.wibowo@sampoernauniversity.ac.id), Asif Awaludin (asif.awaludin@brin.go.id), Halimurrahman Halimurrahman (hali001@brin.go.id), Rommy Hartono (romm001@brin.go.id), Patria Rachman Hakim (patr001@brin.go.id), Adi Wirawan (adiw002@brin.go.id), Haikal Satrya Utama (haikal.utama@my.sampoernauniversity.ac.id), Tiara Kusuma Dewi (tiara.kusuma@sampoernauniversity.ac.id)
Published: 2024-07-15

This paper considers the problem of distributed formation scaling of Multi-Agent Systems (MASs) under a switching directed graph, where the scaling of the formation is determined by one leader agent. Two graphs are used: a directed sensing graph, over which neighboring agents exchange relative displacements, and a directed communication graph, over which they exchange the formation scaling factor and the group velocity. One leader agent, which decides the formation scaling factor as well as the velocity of the group, is chosen among the agents. It is shown that under a switching directed graph, the group of agents achieves the desired formation pattern, with the desired scaling factor and the desired group velocity, provided the union of the sensing and communication graphs contains a directed spanning tree.
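To make the setup concrete, a minimal Python sketch of formation tracking with scaling for double-integrator agents might look as follows, under strong simplifying assumptions: a fixed (non-switching) directed graph and a scaling factor already known to every agent, whereas the paper handles switching graphs and leader-propagated scaling and velocity information:

```python
# Minimal sketch of formation tracking with size scaling for double-integrator
# agents (hypothetical simplification of the paper's setting).
import numpy as np

N = 4
# A[i, j] = 1 if agent i senses agent j; agent 0 is the leader. This graph
# contains a directed spanning tree rooted at the leader.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0]], dtype=float)
delta = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # unit-square pattern
s = 2.0                        # formation scaling factor chosen by the leader
v_des = np.array([0.5, 0.0])   # desired group velocity
kp, kv = 2.0, 3.0              # position and velocity gains
dt = 0.01

rng = np.random.default_rng(1)
p = rng.normal(size=(N, 2))    # positions (random start)
v = np.zeros((N, 2))           # velocities

for _ in range(5000):
    u = np.zeros((N, 2))
    for i in range(1, N):      # follower control law
        for j in range(N):
            if A[i, j]:
                # drive the relative displacement toward the scaled offset
                # and match the neighbor's velocity
                u[i] += (-kp * ((p[i] - p[j]) - s * (delta[i] - delta[j]))
                         - kv * (v[i] - v[j]))
    v[0] = v_des               # leader moves at the desired group velocity
    v[1:] += dt * u[1:]        # double-integrator dynamics for followers
    p += dt * v

# Residual formation error per agent (near zero once converged).
print(np.round((p - p[0]) - s * (delta - delta[0]), 3))
```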
A Non-hydrostatic Model for Simulating Dam-Break Flow Through Various Obstacles
http://46.28.109.63/index.php/mendel/article/view/305
Komang Dharmawan (k.dharmawan@unud.ac.id), Putu Veri Swastika (veriswastika@unud.ac.id), G K Gandhiadi (gandhiadi@unud.ac.id), Sri Redjeki Pudjaprasetya (srpudjap@gmail.com)
Published: 2024-07-15

In this paper, we develop a mathematical model for simulating dam-break flow through various obstacles. The model is an extension of the one-layer non-hydrostatic (NH-1L) model that accounts for varying channel width (Saint-Venant). We demonstrate the ability of the proposed scheme to simulate the free-surface waves generated by dam-break flow by performing two types of simulation: flow over a bottom obstacle and flow through a channel-wall contraction. The scheme produces the correct surface wave profile, comparable with existing experimental data, and captures the evolution of a negative wave displacement followed by an oscillating dispersive wave train. These well-captured dispersive phenomena indicate both the appropriate numerical treatment of the dispersive term and the overall performance of the model.


CLaRA: Cost-effective LLMs Function Calling Based on Vector Database
http://46.28.109.63/index.php/mendel/article/view/404
Miloslav Szczypka, Lukáš Jochymek, Alena Martinková (info@rankacy.com)
Published: 2024-06-30

Since their introduction, function calls have become a widely used feature of the OpenAI API ecosystem. They reliably connect GPT's capabilities with external tools and APIs, and they have quickly found their way into other LLMs. The challenge is the number of tokens consumed per request, since the definition of every available function is sent with each input. We propose a simple solution that reduces token usage by sending only the functions relevant to the user's question: the function definitions are stored in a vector database, and a similarity score is used to select the few that need to be sent. In our benchmarks, the default function-call setup consumed on average 210% more prompt tokens, at a 244% higher prompt (input) price, than our solution. The solution is not limited to specific LLMs; it can be integrated with any LLM that supports function calls, making it a versatile tool for reducing token consumption. Even cheaper models with a high volume of functions can benefit from it.
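The core mechanism can be illustrated with a minimal Python sketch; the embed() stub and the in-memory index below are hypothetical stand-ins for a real embedding model and vector database:

```python
# Minimal sketch of the idea: index each function definition in a vector store
# and send the LLM only the top-k functions most similar to the user question.
import json
import numpy as np

FUNCTIONS = [
    {"name": "get_weather", "description": "Get current weather for a city",
     "parameters": {"type": "object",
                    "properties": {"city": {"type": "string"}}}},
    {"name": "get_stock_price", "description": "Get the latest stock price",
     "parameters": {"type": "object",
                    "properties": {"ticker": {"type": "string"}}}},
    {"name": "book_meeting", "description": "Schedule a meeting in the calendar",
     "parameters": {"type": "object",
                    "properties": {"time": {"type": "string"}}}},
]

def embed(text: str) -> np.ndarray:
    """Crude hashing 'embedding' for the demo; a real system would call an
    embedding model here. Very short words are skipped as a stopword filter."""
    vec = np.zeros(64)
    for word in text.lower().split():
        if len(word) > 3:
            vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

# Embed every function once, indexing on its name plus description.
index = np.stack([embed(f["name"] + " " + f["description"]) for f in FUNCTIONS])

def select_functions(question: str, top_k: int = 1) -> list:
    """Return the top-k functions by cosine similarity to the question."""
    scores = index @ embed(question)  # unit vectors, so dot product = cosine
    return [FUNCTIONS[i] for i in np.argsort(scores)[::-1][:top_k]]

# Only this subset goes into the request's tools/functions field, instead of
# every definition, which is where the prompt-token savings come from.
print(json.dumps(select_functions("What is the weather in Brno today?"), indent=2))
```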