Keyoxide: https://keyoxide.org/9f193ae8aa25647ffc3146b5416f303b43c20ac3

OpenPGP: openpgp4fpr:9f193ae8aa25647ffc3146b5416f303b43c20ac3

  • 61 Posts
  • 45 Comments
Joined 2 years ago
Cake day: November 8th, 2022

  • Yuu Yin@group.lt to Software Engineering@group.lt · Waterfall

    It is almost like things such as the PMBOK (which has now changed to a principles-based body of knowledge)… these things have no basis in the scientific method (empirically based), having their origins back in DoD needs.

    Also reminds me of this important research article, “The two paradigms of software development research”, posted here before: https://group.lt/post/46119

    The two categories of models use substantially different terminology. The Rational models tend to organize development activities into minor variations of requirements, analysis, design, coding and testing – here called Royce’s Taxonomy because of their similarity to the Waterfall Model. All of the Empirical models deviate substantially from Royce’s Taxonomy. Royce’s Taxonomy – not any particular sequence – has been implicitly co-opted as the dominant software development process theory [5]. That is, many research articles, textbooks and standards assume:

    1. Virtually all software development activities can be divided into a small number of coherent, loosely-coupled categories.
    2. The categories are typically the same, regardless of the system under construction, project environment or who is doing the development.
    3. The categories approximate Royce’s Taxonomy.

    … Royce’s Taxonomy is so ingrained as the dominant paradigm that it may be difficult to imagine a fundamentally different classification. However, good classification systems organize similar instances and help us make useful inferences [98]. Like a good system decomposition, a process model or theory should organize software development activities into categories that have high cohesion (activities within a category are highly related) and loose coupling (activities in different categories are loosely related) [99].

    Royce’s Taxonomy is a problematic classification because it does not organize like with like. Consider, for example, the design phase. Some design decisions are made by “analysts” during what appears to be “requirements elicitation”, while others are made by “designers” during a “design meeting”, while others are made by “programmers” while “coding” or even “testing.” This means the “design” category exhibits neither high cohesion nor loose coupling. Similarly, consider the “testing” phase. Some kinds of testing are often done by “programmers” during the ostensible “coding” phase (e.g. static code analysis, fixing compilation errors) while others are often done by “analysts” during what appears to be “requirements elicitation” (e.g. acceptance testing). Unit testing, meanwhile, includes designing and coding the unit tests.





  • Wow, this is truly good; a while ago I read that many delays in public healthcare services are due to no-shows. I liked the fact that, with the information about who was more likely to no-show, UHP then contacted those people.

    UHP was able to cut no-shows for patients who were highly likely not to show up by more than half. That patient population went from a dismal 15.63% show rate to 39.77%. A dramatic increase. At the same time, patients in the moderate category improved from a 42.14% show rate to 50.22%.

    Of course, this article sounds like an ad for eClinicalWorks, but it is an interesting and very good application of AI regardless.



  • Yuu Yin@group.lt to Linux@lemmy.ml · Downsides of Flatpak

    Well; Darwin users, just like Linux users, should also work on making packages available for their platform, as Nix is still in its adoption phase. There are many already. IIRC I, who never use macOS, put some effort into getting 1 or 2 packages (likely more) to build on Darwin.





  • Yuu Yin@group.lt to Linux@lemmy.ml · Downsides of Flatpak

    When I was packaging Flatpaks, the greatest downside was:

    No built-in package manager

    There is a repo with shared dependencies, but it covers very few, so you need to package all the dependencies yourself… So I personally am not interested in packaging for Flatpak other than on very rare occasions… Nix and Guix are definitely better solutions (except for the isolation aspect, which is not a built-in feature; you need to do it manually), and one can use them on many distros; Nix even on macOS!





  • Using it as the backend for a very important web app (with possible IoT applications in the near future as well) that I have already conceptualized and have some prototypes for, etc.; this is what motivates me. I feel that, for this project specifically, I shall first learn the official Book (which I am doing) and have a play with the recommended libraries and the Rust on Nails approach. I also have many other interesting projects in mind, and want to contribute to e.g. Lemmy (I have many Rust projects git cloned, including it).
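
    For a sense of what that first play could look like, here is a minimal, hypothetical sketch of a backend endpoint in the style Rust on Nails suggests, assuming axum and tokio as the web stack; the route and types are made up for illustration:

        // Hypothetical sketch: a tiny JSON endpoint with axum + tokio.
        // The route name and Health type are illustrative, not from any real project.
        use axum::{routing::get, Json, Router};
        use serde::Serialize;

        #[derive(Serialize)]
        struct Health {
            status: &'static str,
        }

        // Handler returning JSON; a real handler would talk to a database, etc.
        async fn health() -> Json<Health> {
            Json(Health { status: "ok" })
        }

        #[tokio::main]
        async fn main() {
            let app = Router::new().route("/health", get(health));

            // Bind and serve (axum 0.7 style).
            let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
                .await
                .expect("bind failed");
            axum::serve(listener, app).await.expect("server error");
        }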







  • I had listened to it when you originally posted it and made some annotations; commenting on some of them now.

    Lamport talks about all this “developers shall be ENGINEERS and know their math”, BUT most software engineering positions are not engineering, and even fewer approach classical engineering. BECAUSE why spend effort learning math WHEN one can use all the constructed abstractions to get a greater return on investment with less effort? I do not think people who do high-level development need to know math they won’t use anyway; but those jobs will likely be automated earlier.

    I think, of course, actual engineering comes in when one needs to do lower-level development, depending on the project domain, or things that need to be correct. I mean, systems cannot actually be 100% correct, given among other things that chips are proprietary, so there is no way to fully verify them.

    Interesting also the mention of the clocks paper, and that its actual implicit insight is a system’s components applying the same commands/inputs/computations so that they maintain the same state machine, besides the consensus algorithm for fault tolerance and the mutual exclusion algorithm.
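
    A minimal sketch of that implicit insight, in hypothetical Rust (the commands and the Account type are made up; agreement on the command order is assumed and not shown):

        // Replicated state machine idea: replicas that apply the same
        // deterministic commands in the same order end up in the same state.
        // Agreeing on that order is what a consensus algorithm provides.

        #[derive(Clone, Copy)]
        enum Command {
            Deposit(i64),
            Withdraw(i64),
        }

        #[derive(Debug, PartialEq, Default)]
        struct Account {
            balance: i64,
        }

        impl Account {
            // Deterministic transition: same command on same state => same next state.
            fn apply(&mut self, cmd: Command) {
                match cmd {
                    Command::Deposit(n) => self.balance += n,
                    Command::Withdraw(n) => self.balance -= n,
                }
            }
        }

        fn main() {
            // The agreed-upon log of commands, identical for every replica.
            let log = [Command::Deposit(100), Command::Withdraw(30), Command::Deposit(5)];

            let mut replica_a = Account::default();
            let mut replica_b = Account::default();

            for cmd in log.iter().copied() {
                replica_a.apply(cmd);
                replica_b.apply(cmd);
            }

            // Both replicas converge on the same state.
            assert_eq!(replica_a, replica_b);
            println!("replicas agree: balance = {}", replica_a.balance);
        }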

    And the ideas coming up when working on problems.